Multiple partitions in a Spark RDD

Date: 2015-04-13 13:03:14

Tags: scala playframework apache-spark rdd apache-spark-sql

So I am trying to fetch data from a MySQL database using Spark within a Play/Scala project. Since the number of rows I need to receive is huge, my goal is to get an Iterator from the Spark RDD. Here are the Spark context and configuration...

  private val configuration = new SparkConf()
    .setAppName("Reporting")
    .setMaster("local[*]")
    .set("spark.executor.memory", "2g")
    .set("spark.akka.timeout", "5")
    .set("spark.driver.allowMultipleContexts", "true")

  val sparkContext = new SparkContext(configuration)

The JdbcRDD, together with the SQL query, looks like this:

val query =
  """
    |SELECT id, date
    |FROM itembid
    |WHERE date BETWEEN ? AND ?
  """.stripMargin


val rdd = new JdbcRDD[ItemLeadReportOutput](
  SparkProcessor.sparkContext,
  driverFactory,
  query,
  rangeMinValue.get,
  rangeMaxValue.get,
  partitionCount,
  rowMapper
).persist(StorageLevel.MEMORY_AND_DISK)
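
For context: JdbcRDD expects driverFactory to be a () => Connection and rowMapper to be a ResultSet => ItemLeadReportOutput. They are not shown here, but a minimal sketch of what they could look like (the connection details and the shape of ItemLeadReportOutput are assumptions, not taken from the project):

// Hypothetical sketch: a connection factory and a row mapper matching
// JdbcRDD's constructor and the two columns selected above.
val driverFactory: () => java.sql.Connection = () => {
  Class.forName("com.mysql.jdbc.Driver") // register the MySQL driver
  java.sql.DriverManager.getConnection("jdbc:mysql://localhost/database", "user", "pass")
}

val rowMapper: java.sql.ResultSet => ItemLeadReportOutput = rs =>
  ItemLeadReportOutput(rs.getLong("id"), rs.getDate("date")) // assumes this case-class shape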

That is too much data to fetch at once. With smaller datasets, at the start, it was possible to get an iterator from rdd.toLocalIterator. However, in this specific case it cannot compute the iterator. So my goal is to go through the partitions one by one and receive the data part by part. I keep getting errors. What is the proper way of doing this?
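
Roughly, the partition-by-partition retrieval I am after would look like this sketch (the name partIterator is made up for illustration). Each pass keeps only one partition via mapPartitionsWithIndex, so only a single partition's rows sit in driver memory at a time; thanks to the persist above, each partition should only be computed once:

// Sketch of the goal: collect one partition at a time on the driver.
val partIterator: Iterator[ItemLeadReportOutput] =
  (0 until partitionCount).iterator.flatMap { i =>
    rdd.mapPartitionsWithIndex(
      (idx, it) => if (idx == i) it else Iterator.empty,
      preservesPartitioning = true
    ).collect()
  }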

1 Answer:

Answer 0 (score: 1)

I believe you are facing a heap problem while reading your MySQL table.

What I would do is fetch the data from MySQL into a file on a storage system (HDFS, local), and then read it back using Spark context's textFile!

Example:

import java.io.FileWriter
import java.sql.{Connection, DriverManager, ResultSet}

import au.com.bytecode.opencsv.CSVWriter

object JDBCExample {

  def main(args: Array[String]) {
    val driver = "com.mysql.jdbc.Driver"
    val url = "jdbc:mysql://localhost/database"
    val username = "user"
    val password = "pass"

    var connection: Connection = null

    try {
      Class.forName(driver)
      connection = DriverManager.getConnection(url, username, password)

      // This is the tricky part of reading a huge MySQL table: a forward-only,
      // read-only statement with fetchSize set to Integer.MIN_VALUE tells the
      // MySQL driver to stream rows one by one instead of buffering the whole
      // result set in memory.
      val statement = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
      statement.setMaxRows(0)
      statement.setFetchSize(Integer.MIN_VALUE)

      val resultSet = statement.executeQuery("select * from ex_table")

      val fileWriter = new FileWriter("output.csv")
      val writer = new CSVWriter(fileWriter, '\t')

      while (resultSet.next()) {
        val entries: List[String] = List(/* ... process the current row here ... */)
        writer.writeNext(entries.toArray)
      }
      writer.close()

    } catch {
      case e: Throwable => e.printStackTrace()
    } finally {
      if (connection != null) connection.close()
    }
  }
}

Once the data is stored, you can read it back:

val data = sc.textFile("output.csv")
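
From there, to get back to the original goal of iterating row by row, a sketch along these lines should work, assuming you wrote id and date as the two tab-separated columns (the names rows, id, and date are just for illustration):

// Sketch: parse the tab-separated lines written by the exporter above and
// stream them lazily to the driver with toLocalIterator.
val rows = data.map { line =>
  val cols = line.split('\t')
  (cols(0).toLong, cols(1)) // (id, date), matching the exporter's column order
}

rows.toLocalIterator.foreach { case (id, date) =>
  // handle one row at a time here
}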

PS: I took some shortcuts in the code (CSVWriter, for instance), but you can use it as a skeleton for what you want to do!
