Context error when creating a SparkSQL context with spark-jobserver

Date: 2015-06-26 16:43:40

Tags: apache-spark apache-spark-sql spark-jobserver

When I run

curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m'

it creates the SparkContext with no problem. But when I try to create a SparkSQL context, I get an error. This is the line I use to create it:

curl -d "" '127.0.0.1:8090/contexts/sql-context?context-factory=spark.jobserver.context.SQLContextFactory'
This is the response it gives:

{
  "status": "CONTEXT INIT ERROR",
  "result": {
    "message": "",
    "errorClass": "java.lang.ClassNotFoundException",
    "stack": [
      "java.net.URLClassLoader$1.run(URLClassLoader.java:366)",
      "java.net.URLClassLoader$1.run(URLClassLoader.java:355)",
      "java.security.AccessController.doPrivileged(Native Method)",
      "java.net.URLClassLoader.findClass(URLClassLoader.java:354)",
      "java.lang.ClassLoader.loadClass(ClassLoader.java:425)",
      "java.lang.ClassLoader.loadClass(ClassLoader.java:358)",
      "spark.jobserver.JobManagerActor.createContextFromConfig(JobManagerActor.scala:265)",
      "spark.jobserver.JobManagerActor$$anonfun$wrappedReceive$1.applyOrElse(JobManagerActor.scala:106)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)",
      "ooyala.common.akka.ActorStack$$anonfun$receive$1.applyOrElse(ActorStack.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)",
      "ooyala.common.akka.Slf4jLogging$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Slf4jLogging.scala:26)",
      "ooyala.common.akka.Slf4jLogging$class.ooyala$common$akka$Slf4jLogging$$withAkkaSourceLogging(Slf4jLogging.scala:35)",
      "ooyala.common.akka.Slf4jLogging$$anonfun$receive$1.applyOrElse(Slf4jLogging.scala:25)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)",
      "ooyala.common.akka.ActorMetrics$$anonfun$receive$1.applyOrElse(ActorMetrics.scala:24)",
      "akka.actor.Actor$class.aroundReceive(Actor.scala:465)",
      "ooyala.common.akka.InstrumentedActor.aroundReceive(InstrumentedActor.scala:8)",
      "akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)",
      "akka.actor.ActorCell.invoke(ActorCell.scala:487)",
      "akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)",
      "akka.dispatch.Mailbox.run(Mailbox.scala:220)",
      "akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)",
      "scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)",
      "scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)",
      "scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)",
      "scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)"
    ]
  }
}
The same thing happens if I use HiveContextFactory, even when my curl looks like this:

curl -d "" '127.0.0.1:8090/contexts/sql-context?context-factory=spark.jobserver.context.HiveContextFactory'

and I still get this error.
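
The java.lang.ClassNotFoundException at the top of the stack suggests the context factory class is simply not on the job server's classpath. As a quick sanity check (the jar name below is an assumption based on a default spark-jobserver deployment; adjust it to match yours), you can grep the deployed server jar for the factory classes:

$ jar tf spark-job-server.jar | grep ContextFactory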

1 Answer:

Answer 0 (score: 0)

You should build the job-server-extras project that ships with spark-jobserver. You can do this with SBT:

$ sbt
> project job-server-extras
> assembly

Then take the job-server-extras assembly jar from that project's target directory, and you will be able to use spark.jobserver.context.SQLContextFactory and spark.jobserver.context.HiveContextFactory.
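
A minimal sketch of that deployment step follows; the jar names, Scala version directory, and deploy directory are all assumptions (they depend on your spark-jobserver version and configuration), so treat this as a template rather than exact commands:

# Use the extras assembly in place of the plain server jar (names assumed)
$ cp job-server-extras/target/scala-2.10/spark-job-server-extras.jar /opt/spark-jobserver/spark-job-server.jar
# Restart the server so the extras classes land on its classpath
$ cd /opt/spark-jobserver && ./server_stop.sh && ./server_start.sh
# Retry creating the SQL context
$ curl -d "" '127.0.0.1:8090/contexts/sql-context?context-factory=spark.jobserver.context.SQLContextFactory'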