Getting a NullPointerException when performing an action on a Spark DataFrame

Time: 2018-05-17 07:50:50

Tags: apache-spark apache-spark-sql spark-streaming apache-spark-dataset

I am creating a DataFrame from an RDD using the code below. I am able to perform operations on the RDD, and the RDD is not empty.

I tried the following two approaches; both throw the same exception.

Approach 1: build the Dataset with sparkSession.createDataFrame().

System.out.println("RDD Count: " + rdd.count());
        Dataset<Row> rows = applicationSession
                .getSparkSession().createDataFrame(rdd,  data.getSchema()).toDF(data.convertListToSeq(data.getColumnNames()));
        rows.createOrReplaceTempView(createStagingTableName(sparkTableName));
        rows.show();
        rows.printSchema();
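
For reference, here is a minimal sketch of what the data helpers are assumed to look like; getSchema() and convertListToSeq() are not shown in the question, so this is a hypothetical reconstruction, not the actual code:

    import java.util.List;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;
    import scala.collection.JavaConverters;
    import scala.collection.Seq;

    // Hypothetical reconstruction of the helpers referenced above.
    class DataHelper {
        private final List<String> columnNames;

        DataHelper(List<String> columnNames) {
            this.columnNames = columnNames;
        }

        List<String> getColumnNames() {
            return columnNames;
        }

        // Builds a StructType from the column names. nullable = true matters:
        // a null value in a non-nullable field fails only at action time, on
        // the executors, with a NullPointerException much like the one below.
        StructType getSchema() {
            StructField[] fields = columnNames.stream()
                    .map(name -> DataTypes.createStructField(name, DataTypes.StringType, true))
                    .toArray(StructField[]::new);
            return DataTypes.createStructType(fields);
        }

        // Bridges a java.util.List to the scala.collection.Seq that toDF(...) expects.
        Seq<String> convertListToSeq(List<String> columns) {
            return JavaConverters.asScalaBufferConverter(columns).asScala().toSeq();
        }
    }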

Approach 2: create the Dataset with the Hive context.

System.out.println("RDD Count: " + rdd.count());
    System.out.println("Create view using HiveContext..");
    Dataset<Row> rows = applicationSession.gethiveContext().applySchema(rdd, data.getSchema());

I can print the schema of the dataset with both approaches, so I am not sure what exactly is causing the NullPointerException.

show() internally calls take(), and take() is what throws the NullPointerException. But why would this Dataset be null? If the RDD contains values, the Dataset should not be empty.

This is strange behavior.
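
The NPE in the stack trace below originates inside the per-row conversion that createDataFrame sets up (SparkSession.scala:469), which suggests the problem lies in the rows themselves rather than the Dataset being null. One way to narrow this down is a diagnostic pass over the RDD before building the DataFrame; this sketch assumes rdd is a JavaRDD<Row>, as the createDataFrame call implies:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.types.StructType;

    // Diagnostic only: count rows that would break the Row -> InternalRow
    // conversion (null rows, or rows whose arity disagrees with the schema).
    StructType schema = data.getSchema();   // same schema passed to createDataFrame
    final int expectedArity = schema.fields().length;
    long badRows = rdd.filter(row -> row == null || row.length() != expectedArity).count();
    System.out.println("Rows incompatible with schema: " + badRows);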

Here are the logs for the same run:

    RDD Count: 35

Also, I can run the code above in local mode without any exception; it works fine.

As soon as I deploy this code on YARN, I start getting the exception below.

I am able to create the DataFrame, and I can even register the view. But as soon as I run a rows.show() or rows.count() action on the Dataset, I get the following error:

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1517)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1505)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1504)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1504)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1732)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1687)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1676)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2029)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2069)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2861)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2150)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2150)
    at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2842)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2841)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2150)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2363)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:241)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:637)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:596)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:605)
Caused by: java.lang.NullPointerException
    at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:469)
    at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:469)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:235)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Am I doing something wrong here? Please advise.

1 answer:

Answer 0 (score: -1)

Can you post the DataFrame schema? The problem lies in the schema string you are using and in the delimiter used to split that schema string.
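
To illustrate what this answer is hinting at (a sketch with an assumed schema string; the real one is not shown in the question): if the schema is built by splitting a string, a delimiter that does not match the string produces a StructType whose fields do not line up with the Row values, and the mismatch only surfaces when an action such as show() or count() runs on the executors.

    import java.util.Arrays;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    // Assumed, space-delimited schema string; the real format is not shown.
    String schemaString = "name age salary";

    // Splitting on the wrong delimiter (e.g. ",") would yield one malformed
    // field here, and the Row-to-schema mismatch would then fail lazily, at
    // action time, much like the NullPointerException in the stack trace.
    StructField[] fields = Arrays.stream(schemaString.split(" "))
            .map(name -> DataTypes.createStructField(name.trim(), DataTypes.StringType, true))
            .toArray(StructField[]::new);
    StructType schema = DataTypes.createStructType(fields);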