Exception while reading input data from a system directory

Asked: 2017-07-14 06:09:40

Tags: scala apache-spark

I am trying to read files from a system directory. While reading from the directory, I get the following exception.

Exception in thread "main" java.io.IOException: No FileSystem for scheme: null
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:372)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:135)
at org.directory.spark.filter.sparksql$.run(sparksql.scala:47)
at org.directory.spark.filter.WisilicaSanitizerDataDriver$$anonfun$main$2.apply(WisilicaSanitizerDataDriver.scala:57)
at org.directory.spark.filter.WisilicaSanitizerDataDriver$$anonfun$main$2.apply(WisilicaSanitizerDataDriver.scala:56)
at scala.Option.map(Option.scala:146)
at org.directory.spark.filter.WisilicaSanitizerDataDriver$.main(WisilicaSanitizerDataDriver.scala:56)
at org.directory.spark.filter.WisilicaSanitizerDataDriver.main(WisilicaSanitizerDataDriver.scala)
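From what I can tell, Hadoop throws "No FileSystem for scheme: null" when the path string it is handed starts with "//": the Path parser then treats the first segment as a URI authority and leaves the scheme null, so FileSystem.getFileSystemClass has nothing to look up. A minimal sketch of that parsing behaviour (the literal path below is my guess at what buildPaths produces, based on the makePath call further down):

    import org.apache.hadoop.fs.Path

    // A leading "//" makes the first segment a URI authority, not a directory;
    // the scheme stays null, which is what the exception complains about.
    val p = new Path("//home/rakshi/workspace1/spark/spark-warehouse/2017/07/14/06")
    println(p.toUri.getScheme)    // null
    println(p.toUri.getAuthority) // home
    println(p.toUri.getPath)      // /rakshi/workspace1/spark/spark-warehouse/2017/07/14/06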

Here is my code:

    while (currentDate.isBefore(endDate) || currentDate.isEqual(endDate)) {
      val (inpath_tag, outpath) = buildPaths(currentDate, sc)

      val df = sqlContext.read.format("com.databricks.spark.csv")
        .option("header", "false")     // the files have no header line
        .option("inferSchema", "true") // automatically infer column types
        .option("delimiter", ":")
        .load(inpath_tag.toString())
    }

    val inpath_tag = new Path(
      makePath("/", Some("") :: Some("/home/rakshi/workspace1/spark/spark-warehouse/") ::
        Some(year) :: Some(month) :: Some(day) :: Some(hour) :: Nil))

Any help would be appreciated.

0 Answers