Apache Kylin streaming cube build error: no counters for job

Time: 2019-01-22 12:15:33

Tags: apache-spark apache-kafka mapreduce kylin

I am following the tutorial below for streaming cube build:
Kylin Cube from Streaming (Kafka)

All properties are set as described on that page.
However, when the cube build is triggered, it fails at Step 1, "Save data from Kafka", with:

org.apache.kylin.engine.mr.exception.MapReduceException: no counters for job job_1547096967734_0086
at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:173)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:164)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:70)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:164)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I have seen Apache kylin cube fails “no counters for job”, but that use case is a normal cube build, not a streaming cube build from Kafka.


The following entries in mapred-root-historyserver.log do not seem to help:

2019-01-22 11:33:15,557 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob: Loading job: job_1547096967734_0087 from file: hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1547096967734_0087-1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-1548149585065-0-0-FAILED-default-1548149566816.jhist
2019-01-22 11:33:15,557 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob: Loading history file: [hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1547096967734_0087-1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-1548149585065-0-0-FAILED-default-1548149566816.jhist]
2019-01-22 11:33:15,572 INFO org.apache.hadoop.mapreduce.jobhistory.JobSummary: jobId=job_1547096967734_0087,submitTime=1548149562328,launchTime=1548149566816,firstMapTaskLaunchTime=1548149570064,firstReduceTaskLaunchTime=0,finishTime=1548149585065,resourcesPerMap=1024,resourcesPerReduce=0,numMaps=1,numReduces=0,user=root,queue=default,status=FAILED,mapSlotSeconds=8,reduceSlotSeconds=0,jobName=Kylin_Save_Kafka_Data_kylin_streaming_cube_Step
2019-01-22 11:33:15,572 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Deleting JobSummary file: [hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1547096967734_0087.summary]
2019-01-22 11:33:15,574 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1547096967734_0087-1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-1548149585065-0-0-FAILED-default-1548149566816.jhist to hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done/2019/01/22/000000/job_1547096967734_0087-1548149562328-root-Kylin_Save_Kafka_Data_kylin_streaming_cube_Step-1548149585065-0-0-FAILED-default-1548149566816.jhist
2019-01-22 11:33:15,574 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1547096967734_0087_conf.xml to hdfs://localhost:9000/tmp/hadoop-yarn/staging/history/done/2019/01/22/000000/job_1547096967734_0087_conf.xml
2019-01-22 11:35:30,160 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: Starting scan to move intermediate done files

This is a completely manual Kylin installation; the component versions are listed below:

apache-hive-2.3.4-bin
apache-kylin-2.5.2-bin-hbase1x
hadoop-2.9.1
hbase-1.4.9
kafka_2.11-2.0.0
spark-2.3.2-bin-hadoop2.7
zookeeper-3.4.13

Any help would be greatly appreciated.

3 Answers:

Answer 0 (score: 0)

It seems there is something wrong with your environment. You can check more logs for this error message, and you'd better refer to the latest documentation: http://kylin.apache.org/docs/tutorial/cube_streaming.html. If you want a quick start with Kylin, it is recommended to try it on an integrated sandbox (such as the HDP sandbox) and make sure it has at least 10 GB of memory.
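As a rough sketch of where to look for more detail (assuming a default tarball install where KYLIN_HOME points at the Kylin directory; adjust paths to your setup):

# Kylin's own log usually carries the underlying diagnostics for the failed step
tail -n 200 $KYLIN_HOME/logs/kylin.log

# YARN-side status of the failed MR job (the application id mirrors the job id in the error)
yarn application -status application_1547096967734_0086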

Answer 1 (score: 0)

Please check the MR job for this first step in YARN. Within that job you can drill into each mapper's log, and you should be able to see some exception there. Common causes include "couldn't connect to Kafka", "couldn't load the Kafka client jar", and so on.
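For example, a minimal way to pull those mapper logs from the command line (assuming YARN log aggregation is enabled; the application id is derived from the job id in the error above):

# fetch the aggregated container logs for the failed MR job and look for the real exception
yarn logs -applicationId application_1547096967734_0086 | grep -E -A 20 "Exception|Error"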

Answer 2 (score: 0)

We were able to fix it by providing kafka-client-2.0.0.jar in the yarn share lib. As the mapreduce job log said, the Kafka class def could not be found.
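A sketch of one way to do this (assumptions on my side: the jar ships as kafka-clients-2.0.0.jar under $KAFKA_HOME/libs, and Hadoop's common lib is just one of several locations that put it on the MR task classpath):

# locate the client jar bundled with the Kafka distribution
ls $KAFKA_HOME/libs/kafka-clients-2.0.0.jar

# copy it where MapReduce tasks can see it (repeat on every node), then restart YARN
cp $KAFKA_HOME/libs/kafka-clients-2.0.0.jar $HADOOP_HOME/share/hadoop/common/lib/
$HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/start-yarn.sh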
