Oozie Spark action (with HiveContext) gives java.lang.OutOfMemoryError: PermGen space

Asked: 2019-01-19 15:36:19

Tags: apache-spark sbt out-of-memory oozie hivecontext

I am trying to run a self-contained Spark/Scala application in Oozie. Note that I am using the CDH 5.13 QuickStart VM with 20 GB of RAM (it includes Cloudera Manager, HUE, etc., and I upgraded Java from 7 to 8).

The code does almost nothing: it just creates a HiveContext and then creates a Hive table:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object ThirdApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Third Application")
    val sc = new SparkContext(conf)

    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
    import sqlContext.implicits._
    sqlContext.sql("CREATE TABLE IF NOT EXISTS default.src (key INT, value STRING)")
  }
}

The sbt file:

name := "Third Project"
version := "1.0"
scalaVersion := "2.10.5"
libraryDependencies ++= Seq("org.apache.spark" %% "spark-core" % "1.6.0",
 "org.apache.spark" %% "spark-hive"  % "1.6.0")

When I submit it (in the shell), the application runs fine and the Hive table is created. But when I run the same application in Oozie, it hits the memory problem.
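
For reference, the shell path that works would be along these lines (a sketch; the exact commands are assumptions, not quoted from the original):

# Package the application; sbt writes target/scala-2.10/third-project_2.10-1.0.jar
sbt package

# Local submission: this run completes and creates the Hive table
spark-submit --class ThirdApp --master local third-project_2.10-1.0.jar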

Note that I have run Spark applications in Oozie before, and they all worked fine except for this use case that includes a HiveContext.

Here is the workflow.xml:

<workflow-app name="spark-scala" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-5a6a"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="spark-5a6a">
        <spark xmlns="uri:oozie:spark-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>local</master>
            <mode>client</mode>
            <name>MySpark</name>
            <class>ThirdApp</class>
            <jar>third-project_2.10-1.0.jar</jar>
            <file>/user/cloudera/oozie-spark/third-project_2.10-1.0.jar#third-project_2.10-1.0.jar</file>
        </spark>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>

Here is the job.properties:

oozie.use.system.libpath=True
send_email=False
dryrun=False
nameNode=hdfs://quickstart.cloudera:8020
jobTracker=quickstart.cloudera:8032
security_enabled=False
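
For completeness, a workflow like this is typically started with the Oozie CLI; a sketch (the server URL is an assumption, port 11000 being the QuickStart default):

# Submit and start the workflow using the properties above
oozie job -oozie http://quickstart.cloudera:11000/oozie -config job.properties -run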

Note that I added spark to the superuser group (Cloudera Manager > Categories > Security > Superuser Group) to avoid permission problems:

[Image: Adding spark to the superuser group (Cloudera Manager view)]

[Image: hive-site.xml view]

The stdout logs:

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], exception invoking main(), PermGen space

ERROR org.apache.hadoop.mapred.YarnChild  - Error running child : java.lang.OutOfMemoryError: PermGen space

WARN  org.apache.hadoop.ipc.Client  - Unexpected error reading responses on connection Thread[IPC Client (1722336150) connection to /127.0.0.1:59738 from job_1547905343759_0002,5,main]

java.lang.OutOfMemoryError: PermGen space

INFO  org.apache.hadoop.mapred.Task  - Communication exception: java.io.IOException: The client is stopped

ERROR org.apache.hadoop.yarn.YarnUncaughtExceptionHandler  - Thread Thread[main,5,main] threw an Error.

The stderr logs:

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], exception invoking main(), PermGen space
Halting due to Out Of Memory Error...

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"

The syslog:

INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1547905343759_0002, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@3a06520)
INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: RM_DELEGATION_TOKEN, Service: 127.0.0.1:8032, Ident: (RM_DELEGATION_TOKEN owner=cloudera, renewer=oozie mr token, realUser=oozie, issueDate=1547907649379, maxDate=1548512449379, sequenceNumber=6, masterKeyId=2)
INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /yarn/nm/usercache/cloudera/appcache/application_1547905343759_0002
INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit@1ab7aa29
INFO [main] org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032
INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032

I also looked for errors under Cloudera Manager > Logs > Errors:

Exception in doCheckpoint
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): NameNode still not started
...(more)

Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://quickstart.cloudera:8020/user/history/done]
...
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): NameNode still not started
...(more)

SERVER[quickstart.cloudera] USER[-] GROUP[-] TOKEN[] APP[-] JOB[0000001-190120120522295-oozie-oozi-W] ACTION[0000001-190120120522295-oozie-oozi-W@spark-5a6a] XException, 
org.apache.oozie.command.CommandException: E0800: Action it is not running its in [KILLED] state, action [0000001-190120120522295-oozie-oozi-W@spark-5a6a]
    at org.apache.oozie.command.wf.CompletedActionXCommand.eagerVerifyPrecondition(CompletedActionXCommand.java:92)
    at org.apache.oozie.command.XCommand.call(XCommand.java:257)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

getting attribute DatanodeNetworkCounts of Hadoop:service=DataNode,name=DataNodeInfo threw an exception
javax.management.RuntimeMBeanException: java.lang.NullPointerException
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:342)
...More

Here is the (approximately) complete view of the logs:

/var/log/spark/...log

/var/log/hadoop-hdfs/...log.out

I tried to solve these issues in the following ways:

Increasing the memory for map/reduce in mapred-site.xml:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2128</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2128</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2128</value>
  </property>

[Image: Global view of mapred-site.xml]
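
Note that these three properties only size the YARN containers; PermGen is a separate Java 7 JVM region and is controlled by JVM flags, not by container sizes. A sketch of what raising it for the Oozie launcher task could look like, inside the spark action's <configuration> (the property follows Oozie's oozie.launcher.* pass-through convention; the values are assumptions):

<configuration>
    <property>
        <!-- Forwarded to the launcher map task as mapreduce.map.java.opts; values assumed -->
        <name>oozie.launcher.mapreduce.map.java.opts</name>
        <value>-Xmx1024m -XX:MaxPermSize=512m</value>
    </property>
</configuration>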

I also tried increasing the Java heap: [Image: Java heap settings in Cloudera Manager]

I also tried setting the gateway default group: [Image: Client Java configuration options]

And I tried adding a list of "options" in the workflow containing: --driver-memory 5G (sketched below).
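
In the spark-action schema those options would go into a <spark-opts> element; a sketch of that attempt, plus a PermGen-specific flag that would be the more targeted knob (the flags are assumptions, not from the original):

<spark-opts>--driver-memory 5G --driver-java-options -XX:MaxPermSize=512m</spark-opts>

Since the action uses <master>local</master>, the driver most likely runs inside the already-started launcher JVM, so the launcher-level java.opts sketched above are the flags that would actually reach it.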

But the same error keeps occurring. Can you help?

1 answer:

Answer 0 (score: 0):

I am not sure about the memory issue, but I can see a "Permission denied" issue.
For some reason the folder '/user/spark/applicationHistory/local-1547821006998' is owned by the user 'cloudera' rather than 'spark', so spark cannot write to it.
To fix it, log in to the VM and add the group 'supergroup' to the user spark:

usermod -G supergroup spark

Cheers, Doron
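
A possible complement to the answer (an addition, not part of the original reply): the ownership can also be corrected directly in HDFS instead of changing group membership:

# Run as the HDFS superuser; re-own the Spark event-log directory to spark
sudo -u hdfs hdfs dfs -chown -R spark:spark /user/spark/applicationHistory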