(run-main-0) scala.ScalaReflectionException: class java.sql.Date in JavaMirror with ClasspathFilter()

Date: 2018-11-24 12:10:31

Tags: scala apache-spark

Hi, I have a file my teacher gave me. It involves Scala and Spark. When I run the code, it gives me this exception:

  (run-main-0) scala.ScalaReflectionException: class java.sql.Date in 
  JavaMirror with ClasspathFilter 

The file itself looks like this:

import org.apache.spark.ml.feature.Tokenizer
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object Main {

  type Embedding    = (String, List[Double])
  type ParsedReview = (Integer, String, Double)

  org.apache.log4j.Logger getLogger "org"  setLevel (org.apache.log4j.Level.WARN)
  org.apache.log4j.Logger getLogger "akka" setLevel (org.apache.log4j.Level.WARN)

  val spark = SparkSession.builder
    .appName ("Sentiment")
    .master  ("local[9]")
    .getOrCreate

  import spark.implicits._

  val reviewSchema = StructType(Array(
    StructField ("reviewText", StringType, nullable = false),
    StructField ("overall",    DoubleType, nullable = false),
    StructField ("summary",    StringType, nullable = false)))

  // Read the file and merge the text and summary into a single text column

  def loadReviews (path: String): Dataset[ParsedReview] =
    spark
      .read
      .schema (reviewSchema)
      .json (path)
      .rdd
      .zipWithUniqueId
      .map[(Integer, String, Double)] { case (row, id) =>
        (id.toInt, s"${row getString 2} ${row getString 0}", row getDouble 1) }
      .toDS
      .withColumnRenamed ("_1", "id")
      .withColumnRenamed ("_2", "text")
      .withColumnRenamed ("_3", "overall")
      .as[ParsedReview]

  // Load the GloVe embeddings file

  def loadGlove (path: String): Dataset[Embedding] =
    spark
      .read
      .text (path)
      .map  { _ getString 0 split " " }
      .map  (r => (r.head, r.tail.toList.map (_.toDouble))) // yuck!
      .withColumnRenamed ("_1", "word")
      .withColumnRenamed ("_2", "vec")
      .as[Embedding]

  def main(args: Array[String]) = {

    val glove   = loadGlove ("Data/glove.6B.50d.txt")
    val reviews = loadReviews ("Data/Electronics_5.json") // FIXME

    // replace the following with the project code

    glove.show
    reviews.show

    spark.stop
  }

}

I need to keep the line import org.apache.spark.sql.Dataset because some of the code depends on it, but it is precisely because of it that the exception gets thrown.

My build.sbt file looks like this:

  name := "Sentiment Analysis Project"

  version := "1.1"

  scalaVersion := "2.11.12"

  scalacOptions ++= Seq("-unchecked", "-deprecation")

  initialCommands in console :=
    """
    import Main._
    """

  libraryDependencies += "org.apache.spark" %% "spark-core"  % "2.3.0"

  libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.3.0"

  libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.5"

  libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % "test"

1 Answer:

Answer 0 (score: 1):

I was using OpenJDK 11.0.1. I uninstalled it, and then it worked. Spark 2.3.x is only supported on Java 8; under Java 9+ its runtime reflection for encoder derivation breaks, which is what surfaces as this ScalaReflectionException.

You can check your current Java version by running:

java -version
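
For reference, OpenJDK 11 reports a line like openjdk version "11.0.1", while a working Java 8 installation prints output roughly like the following (the exact build numbers here are illustrative and will differ on your machine):

  java version "1.8.0_202"
  Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
  Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)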

If you have brew installed, you can remove OpenJDK by running:

brew cask uninstall java

To make sure Java 1.8.0 is installed, run the following command:

brew cask install java8
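
If you want the build itself to catch this mismatch up front instead of failing at run time, one option is a guard in build.sbt. This is a minimal sketch, assuming sbt 0.13/1.x and that Java 1.8 is the intended runtime; the assertion message is illustrative:

  // Fail fast when sbt is not running on Java 1.8: Spark 2.3.x does not
  // work on Java 9+, and the failure otherwise only shows up at run time
  // as the ScalaReflectionException above.
  initialize := {
    val _ = initialize.value
    val v  = sys.props("java.specification.version")
    assert(v == "1.8", s"This build requires Java 1.8; found Java $v")
  }

With this in place, running any sbt task under OpenJDK 11 aborts immediately with the assertion message rather than throwing the reflection error mid-run.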