Unable to connect to MongoDB via Spark

Date: 2017-07-17 11:06:32

Tags: mongodb python-2.7 apache-spark pyspark apache-spark-sql

I am trying to read data from MongoDB through an Apache Spark master.

I am using 3 machines:

  • M1 - hosts a MongoDB instance
  • M2 - runs the Spark master, with the Mongo Spark connector installed
  • M3 - runs a Python application that connects to the Spark master on M2

The application (M3) connects to the Spark master as follows:

_sparkSession = SparkSession.builder.master(masterPath).appName(appName)\
    .config("spark.mongodb.input.uri", "mongodb://10.0.3.150/db1.data.coll")\
    .config("spark.mongodb.output.uri", "mongodb://10.0.3.150/db1.data.coll")\
    .getOrCreate()

The application (M3) then tries to read data from the DB:

sqlContext = SQLContext(_sparkSession.sparkContext)
df = sqlContext.read.format("com.mongodb.spark.sql.DefaultSource")\
    .option("uri", "mongodb://user:pass@10.0.3.150/db1.data?readPreference=primaryPreferred")\
    .load()

But it fails with this exception:

    py4j.protocol.Py4JJavaError: An error occurred while calling o56.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.mongodb.spark.sql.DefaultSource. Please find packages at http://spark.apache.org/third-party-projects.html
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:594)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:280)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.mongodb.spark.sql.DefaultSource.DefaultSource
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
        at scala.util.Try.orElse(Try.scala:84)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:579)
        ... 16 more

3 Answers:

Answer 0 (score: 7):

Spark cannot find the com.mongodb.spark.sql.DefaultSource package, hence the error message.

Everything else looks fine; you just need to include the Mongo Spark package:

> $SPARK_HOME/bin/pyspark --packages org.mongodb.spark:mongo-spark-connector_2.11:2.2.0

Or make sure the jar file is on the correct path.

Be sure to check which version of the Mongo-Spark package your Spark version requires: https://spark-packages.org/package/mongodb/mongo-spark
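If the application is launched as a standalone Python script (as in the question) rather than from the pyspark shell, the same dependency can be requested from the SparkSession builder via spark.jars.packages. A minimal sketch, assuming the Scala 2.11 connector build 2.2.0 and placeholder master/app values (adjust all of these to your environment):

from pyspark.sql import SparkSession

# Placeholders; use the real master URL and app name from your setup.
masterPath = "spark://<M2-host>:7077"
appName = "mongo-spark-example"

# Have Spark fetch the Mongo Spark connector from Maven when the session starts.
# The connector version here is an assumption; match it to your Spark/Scala
# version (see the spark-packages link above).
spark = SparkSession.builder\
    .master(masterPath)\
    .appName(appName)\
    .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.11:2.2.0")\
    .config("spark.mongodb.input.uri", "mongodb://10.0.3.150/db1.data.coll")\
    .config("spark.mongodb.output.uri", "mongodb://10.0.3.150/db1.data.coll")\
    .getOrCreate()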

Answer 1 (score: 3):

I am a pyspark user; here is what my code looks like, and it works:

MongoDB connection configuration in pyspark:

from pyspark.sql import SparkSession
spark = SparkSession\
    .builder\
    .master('local')\
    .config('spark.mongodb.input.uri', 'mongodb://user:password@ip.x.x.x:27017/database01.data.coll')\
    .config('spark.mongodb.output.uri', 'mongodb://user:password@ip.x.x.x:27017/database01.data.coll')\
    .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.3.1')\
    .getOrCreate()

Reading from MongoDB:

df01 = spark.read\
    .format("com.mongodb.spark.sql.DefaultSource")\
    .option("database","database01")\
    .option("collection", "collection01")\
    .load()

Writing to MongoDB:

df01.write.format("com.mongodb.spark.sql.DefaultSource")\
    .mode("overwrite")\
    .option("database","database01")\
    .option("collection", "collection02")\
    .save()
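As a quick check after the write (a minimal sketch that simply reads back the collection written above), you can reload collection02 and inspect it:

# Read back the collection that was just written and inspect it
df02 = spark.read\
    .format("com.mongodb.spark.sql.DefaultSource")\
    .option("database", "database01")\
    .option("collection", "collection02")\
    .load()

df02.printSchema()
print(df02.count())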

Answer 2 (score: 1):

I had a hard time configuring a Spark connection to Cosmos DB (MongoDB API), so I decided to post the code that worked for me.

I used Spark 2.4.0 through a Databricks notebook.

from pyspark.sql import SparkSession

# Connect to CosmosDB to write on the collection
userName = "userName"
primaryKey = "myReadAndWritePrimaryKey"
host = "ipAddress"
port = "10255"
database = "dbName"
collection = "collectionName"

# Structure the connection
connectionString = "mongodb://{0}:{1}@{2}:{3}/{4}.{5}?ssl=true&replicaSet=globaldb".format(userName, primaryKey, host, port, database, collection)

spark = SparkSession\
    .builder\
    .config('spark.mongodb.input.uri', connectionString)\
    .config('spark.mongodb.output.uri', connectionString)\
    .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.3.1')\
    .getOrCreate()

# Reading from CosmosDB
df = spark.read\
    .format("com.mongodb.spark.sql.DefaultSource")\
    .option("uri", connectionString)\
    .option("database", database)\
    .option("collection", collection)\
    .load()

# Writing on CosmosDB (Appending new information without replacing documents)
dfToAppendOnCosmosDB.write.format("com.mongodb.spark.sql.DefaultSource")\
    .mode("append")\
    .option("uri", connectionString)\
    .option("replaceDocument", False)\
    .option("maxBatchSize", 100)\
    .option("database", database)\
    .option("collection", collection)\
    .save()

I found the options for configuring the connector at this link.
