Kafka Connect: ConnectException when trying to write to HDFS

Date: 2018-06-06 13:48:10

Tags: hadoop apache-kafka hdfs apache-kafka-connect confluent

I am trying to stream data from a Kafka topic to HDFS. My configuration file looks like this:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=hdfs_test
#topics.dir=/path/to/hdfs/
#store.url=hdfs://localhost:9000

hadoop.conf.dir=/etc/hadoop/conf/
hadoop.home=/path/to/hdfs/

# Top-level directory for the data ingested from Kafka.
#topics.dir

# Kerberos activated?
hdfs.authentication.kerberos=true
# The principal to use when HDFS uses Kerberos for authentication.
connect.hdfs.principal=user@fqdn.com
# The path to the keytab file for the HDFS connector principal. This keytab file should only be readable by the connector user.
connect.hdfs.keytab=/tmp/mykey.keytab
# Principal for HDFS Namenode
hdfs.namenode.principal=hdfs/fqdn@principal.COM
# The period in milliseconds to renew the Kerberos ticket.
#kerberos.ticket.renew.period.ms=3600000

# The format class to use when writing data to the store.
#format.class=io.confluent.connect.hdfs.avro.AvroFormat
# Number of records written to the store before invoking file commits.
flush.size=3

However, I get a ConnectException:

[2018-06-06 15:10:44,191] INFO Login successful for user user@domain.COM using keytab file /tmp/mykey.keytab (org.apache.hadoop.security.UserGroupInformation:966)
[2018-06-06 15:10:44,192] INFO Login as: user@domain.COM (io.confluent.connect.hdfs.DataWriter:173)
[2018-06-06 15:10:44,192] INFO Starting the Kerberos ticket renew thread with period 3600000ms. (io.confluent.connect.hdfs.DataWriter:203)
[2018-06-06 15:10:44,192] INFO Couldn't start HdfsSinkConnector: (io.confluent.connect.hdfs.HdfsSinkTask:90)
org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException 
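The stack trace above is cut off before the wrapped root cause of the `InvocationTargetException`. Since `store.url` is commented out in the config, the connector has to resolve the NameNode from `hadoop.conf.dir`. For reference, a minimal sketch of the HDFS/Kerberos block with the store URL set explicitly; all values are the placeholders from the question, and `hdfs://namenode-host:8020` is an assumed address, not one from the original post:

```properties
# Sketch of the HDFS + Kerberos sink settings (placeholder values).
# store.url is an assumption here -- replace with the real NameNode URI.
store.url=hdfs://namenode-host:8020
hadoop.conf.dir=/etc/hadoop/conf/
hdfs.authentication.kerberos=true
connect.hdfs.principal=user@fqdn.com
# No whitespace after '=' (Java Properties trims leading spaces in values,
# but keeping the value clean avoids confusion).
connect.hdfs.keytab=/tmp/mykey.keytab
hdfs.namenode.principal=hdfs/fqdn@principal.COM
flush.size=3
```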

Any idea what is wrong?

0 Answers
