Flume sink copies garbage data into HDFS

Date: 2016-10-23 08:27:03

Tags: hdfs flume flume-ng

While copying data from a local path to an HDFS sink, I am getting some garbage data in the files at the HDFS location.

My Flume configuration file:

# spool.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = s1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.s1.type = spooldir
a1.sources.s1.spoolDir = /home/cloudera/spool_source
a1.sources.s1.channels = c1

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = flumefolder/events
a1.sinks.k1.hdfs.filetype = Datastream

#Format to be written
a1.sinks.k1.hdfs.writeFormat = Text

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

I am copying a file from the local path "/home/cloudera/spool_source" to the HDFS path "flumefolder/events".

Flume command:

flume-ng agent --conf-file spool.conf --name a1 -Dflume.root.logger=INFO,console

The file "salary.txt" in the local path "/home/cloudera/spool_source" is:

GR1,Emp1,Jan,31,2500
GR3,Emp3,Jan,18,2630
GR4,Emp4,Jan,31,3000
GR4,Emp4,Feb,28,3000
GR1,Emp1,Feb,15,2500
GR2,Emp2,Feb,28,2800
GR2,Emp2,Mar,31,2800
GR3,Emp3,Mar,31,3000
GR1,Emp1,Mar,15,2500
GR2,Emp2,Apr,31,2630
GR3,Emp3,Apr,17,3000
GR4,Emp4,Apr,31,3200
GR7,Emp7,Apr,21,2500
GR11,Emp11,Apr,17,2000

In the destination path "flumefolder/events", the data gets copied with garbage values, like this:

1 W��ȩGR1,Emp1,Jan,31,2500W��ȲGR3,Emp3,Jan,18,2630W��ȷGR4,Emp4,Jan,31,3000W��ȻGR4,Emp4,Feb,28,3000W��ȽGR1,Emp1,Feb,15,2500W����GR2,Emp2,Feb,28,2800W����GR2,Emp2,Mar,31,2800W����GR3,Emp3,Mar,31,3000W����GR1,Emp1,Mar,15,2500W����GR2,Emp2,

I cannot figure out what is wrong with my configuration file spool.conf.

1 Answer:

Answer 0 (score: 1):

Flume configuration properties are case-sensitive, so change the filetype line to fileType, and fix the Datastream value as well, since it is also case-sensitive:

a1.sinks.k1.hdfs.fileType = DataStream

With your current setting, the misspelled property is ignored and the sink falls back to its default file type, SequenceFile, whose binary headers and record markers are the odd characters you are seeing.
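For reference, the corrected sink section of spool.conf would then look like this (a sketch keeping the relative hdfs.path from the question; only the fileType line changes):

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = flumefolder/events
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text

With hdfs.fileType = DataStream and hdfs.writeFormat = Text, the events are written as plain text lines, so the file in HDFS matches the contents of salary.txt.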