Gobblin Kafka to HDFS: gobblin-api-***.jar FileNotFoundException

Date: 2017-10-19 04:01:56

Tags: apache-kafka

I want to collect Kafka messages and store them in HDFS via Gobblin. When I run gobblin-mapreduce.sh, the script throws the following exception:



2017-10-19 11:49:18 CST ERROR [main] gobblin.runtime.AbstractJobLauncher  442 - Failed to launch and run job job_GobblinKafkaQuickStart_1508384954897: java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/Users/fanjun/plugin/gobblin-dist/lib/gobblin-api-0.9.0-642-g13a21ad.jar
java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/Users/fanjun/plugin/gobblin-dist/lib/gobblin-api-0.9.0-642-g13a21ad.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1116)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at gobblin.runtime.mapreduce.MRJobLauncher.runWorkUnits(MRJobLauncher.java:230)
    at gobblin.runtime.AbstractJobLauncher.runWorkUnitStream(AbstractJobLauncher.java:570)
    at gobblin.runtime.AbstractJobLauncher.launchJob(AbstractJobLauncher.java:417)
    at gobblin.runtime.mapreduce.CliMRJobLauncher.launchJob(CliMRJobLauncher.java:89)
    at gobblin.runtime.mapreduce.CliMRJobLauncher.run(CliMRJobLauncher.java:66)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at gobblin.runtime.mapreduce.CliMRJobLauncher.main(CliMRJobLauncher.java:111)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)




路径" /Users/fanjun/plugin/gobblin-dist/lib/gobblin-api-0.9.0-642-g13a21ad.jar"在我的本地磁盘上,而不是在hdfs中,所以hdfs uri找不到它是合理的。 为什么这个脚本想从hdfs加载gobblin-api.jar,而不是从本地磁盘加载?

Here is my job configuration file:



job.name=GobblinKafkaQuickStart
job.group=GobblinKafka
job.description=Gobblin quick start job for Kafka
job.lock.enabled=false

kafka.brokers=10.0.35.148:9092

source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
extract.namespace=gobblin.extract.kafka

writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
writer.file.path.type=tablename
writer.destination.type=HDFS
writer.output.format=txt

data.publisher.type=gobblin.publisher.BaseDataPublisher

mr.job.max.mappers=1

metrics.reporting.file.enabled=true
metrics.log.dir=/gobblin-kafka/metrics
metrics.reporting.file.suffix=txt

bootstrap.with.offset=earliest

fs.uri=hdfs://localhost:9000
writer.fs.uri=hdfs://localhost:9000
state.store.fs.uri=hdfs://localhost:9000

mr.job.root.dir=/gobblin-kafka/working
state.store.dir=/gobblin-kafka/state-store
task.data.root.dir=/jobs/kafkaetl/gobblin/gobblin-kafka/task-data
data.publisher.final.dir=/gobblintest/job-output
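Since fs.uri points at hdfs://localhost:9000, the working, state-store, task-data, and publisher directories above all resolve on HDFS rather than on the local disk. As a sanity check, they can be pre-created with plain HDFS shell commands (a sketch based on the paths in the config above):

# Sketch: pre-create the HDFS directories referenced by the job file above.
hdfs dfs -mkdir -p /gobblin-kafka/working /gobblin-kafka/state-store
hdfs dfs -mkdir -p /jobs/kafkaetl/gobblin/gobblin-kafka/task-data /gobblintest/job-output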




2 Answers:

Answer 0 (score: 1)

Have you considered using Kafka Connect (part of Apache Kafka) together with the HDFS connector?
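For illustration, a minimal sink configuration for the Confluent HDFS connector could look like the sketch below; the connector name, topic, hdfs.url, and flush.size are placeholders to adapt to your environment:

# Illustrative Kafka Connect HDFS sink properties (all values are placeholders).
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=TOPIC_NAME
hdfs.url=hdfs://localhost:9000
flush.size=1000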

Answer 1 (score: -1)

Have you tried running the job with a command like ./bin/gobblin-standalone.sh start --conffile YourJob.job --workdir ./?

I don't know about gobblin-api-xx, but I used this command to store Kafka topics in Hadoop.

My job file:

job.name=KafkaHDFSTransmitter
job.group=GobblinKafka
job.description=Gobblin Kafka to HDFS
job.lock.enabled=false
mr.job.max.mappers=1
mr.job.root.dir=/gobblin-kafka/working
kafka.brokers=KAFKA_ADRESS:PORT
topic.whitelist=TOPIC_TO_PULL
source.class=org.apache.gobblin.source.extractor.extract.kafka.KafkaSimpleSource
extract.namespace=gobblin.extract.kafka
bootstrap.with.offset=earliest
writer.builder.class=org.apache.gobblin.writer.SimpleDataWriterBuilder
writer.file.path.type=tablename
writer.destination.type=HDFS
writer.output.format=txt
simple.writer.delimiter=\n
fs.uri=hdfs://HADOOP_ADRESS:PORT
writer.fs.uri=hdfs://HADOOP_ADRESS:PORT
state.store.fs.uri=hdfs://HADOOP_ADRESS:PORT
state.store.dir=/gobblin-kafka/state-store
task.data.root.dir=/jobs/kafkaetl/gobblin/gobblin-kafka/task-data
data.publisher.final.dir=/dmp-data
metrics.reporting.file.enabled=true
metrics.log.dir=/gobblin-kafka/metrics
metrics.reporting.file.suffix=txt
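After the job completes, the published records should land under data.publisher.final.dir; a quick check (sketch, with the path taken from the config above):

# Sketch: list the published output under data.publisher.final.dir (/dmp-data here).
hdfs dfs -ls /dmp-data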