Flink Avro ParquetWriter in a RollingSink

Date: 2016-12-14 14:08:49

Tags: avro apache-flink parquet

I am running into a problem when trying to set up an AvroParquetWriter inside a RollingSink: the sink's path and the writer's path seem to conflict.

  • flink version: 1.1.3
  • parquet-avro version: 1.8.1

Error:

[...]
12/14/2016 11:19:34 Source: Custom Source -> Sink: Unnamed(8/8) switched to CANCELED
INFO  JobManager - Status of job af0880ede809e0d699eb69eb385ca204 (Flink Streaming Job) changed to FAILED.
java.lang.RuntimeException: Could not forward element to next operator
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:376)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:358)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:346)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:329)
    at org.apache.flink.streaming.api.operators.StreamSource$NonTimestampContext.collect(StreamSource.java:161)
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecord(AbstractFetcher.java:225)
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.run(Kafka09Fetcher.java:253)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: File already exists: /home/user/data/file
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:264)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:257)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:386)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:447)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
    at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:223)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:266)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:217)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:183)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:153)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:119)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:92)
    at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:66)
    at org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:54)
    at fr.test.SpecificParquetWriter.open(SpecificParquetWriter.java:28) // line in code => writer = new AvroParquetWriter(new Path("/home/user/data/file"), schema, compressionCodecName, blockSize, pageSize);
    at org.apache.flink.streaming.connectors.fs.RollingSink.openNewPartFile(RollingSink.java:451)
    at org.apache.flink.streaming.connectors.fs.RollingSink.invoke(RollingSink.java:371)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:39)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:373)
    ... 7 more
INFO  JobClientActor - 12/14/2016 11:19:34  Job execution switched to status FAILED.
12/14/2016 11:19:34 Job execution switched to status FAILED.
INFO  JobClientActor - Terminate JobClientActor.
[...]

Main:

RollingSink sink = new RollingSink<String>("/home/user/data");
sink.setBucketer(new DateTimeBucketer("yyyy/MM/dd"));
sink.setWriter(new SpecificParquetWriter());
stream.addSink(sink);

SpecificParquetWriter:

public class SpecificParquetWriter<V> extends StreamWriterBase<V> {

    private transient AvroParquetWriter writer;

    private CompressionCodecName compressionCodecName = CompressionCodecName.SNAPPY;
    private int blockSize = ParquetWriter.DEFAULT_BLOCK_SIZE;
    private int pageSize = ParquetWriter.DEFAULT_PAGE_SIZE;

    public static final String USER_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + "  { \"name\":\"str1\", \"type\":\"string\" },"
            + "  { \"name\":\"str2\", \"type\":\"string\" },"
            + "  { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    public SpecificParquetWriter(){

    }

    @Override
    // workaround
    public void open(FileSystem fs, Path path) throws IOException {
        super.open(fs, path);
        Schema schema = new Schema.Parser().parse(USER_SCHEMA);

        writer = new AvroParquetWriter(new Path("/home/user/data/file"), schema, compressionCodecName, blockSize, pageSize);
    }

    @Override
    public void write(Object element) throws IOException {
        if(writer != null)
            writer.write(element);
    }

    @Override
    public Writer duplicate() {
        return new SpecificParquetWriter();
    }
}
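The FileAlreadyExistsException in the stack trace comes from the hard-coded path in open(): RollingSink calls the writer's open() once per part file, but every call tries to create the very same file. The failure mode can be reproduced with a minimal, JDK-only sketch (the class and method names here are illustrative, not Flink API; CREATE_NEW mirrors Hadoop's default create() semantics of failing when the target exists):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FixedPathDemo {

    // Stands in for a Writer.open() that always creates the same
    // hard-coded path instead of using the path the sink passes in.
    static void openWriter(Path fixedPath) throws IOException {
        Files.newOutputStream(fixedPath, StandardOpenOption.CREATE_NEW).close();
    }

    public static void main(String[] args) throws IOException {
        Path fixed = Files.createTempDirectory("demo").resolve("file");
        openWriter(fixed);          // first part file: succeeds
        try {
            openWriter(fixed);      // second part file: same path, so it fails
        } catch (FileAlreadyExistsException e) {
            System.out.println("second open failed: " + e.getClass().getSimpleName());
        }
    }
}
```

The second open() fails exactly like the second part-file roll does in the job above.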

I don't know whether I am doing this the right way...

Is there a simple way to do this?

1 answer:

Answer 0 (score: 2)

Whether you use RollingSink or the bucketing sink, this is a limitation of the base class (StreamWriterBase): these sinks only accept Writers that operate on an OutputStream the sink has already opened, not Writers that open their own output, e.g.:

writer= new AvroKeyValueWriter<K, V>(keySchema, valueSchema, compressionCodec, streamObject);
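That stream-based contract can be sketched with plain JDK code. The interface below is hypothetical; it only mirrors the shape of Flink's Writer (the sink owns the path and hands the writer a freshly opened stream for each part file), not the actual Flink API:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical stream-based writer contract: the framework owns the path
// and passes in an already-opened stream, so rolling to a new part file
// just means handing the writer a new stream.
interface StreamBackedWriter<T> {
    void open(OutputStream out) throws IOException;
    void write(T element) throws IOException;
    void close() throws IOException;
}

class LineWriter implements StreamBackedWriter<String> {
    private BufferedWriter out;

    public void open(OutputStream stream) {
        out = new BufferedWriter(new OutputStreamWriter(stream));
    }

    public void write(String element) throws IOException {
        out.write(element);
        out.newLine();
    }

    public void close() throws IOException {
        out.close();
    }
}

public class StreamContractDemo {
    public static void main(String[] args) throws IOException {
        Path part = Files.createTempFile("part-", ".txt");
        LineWriter w = new LineWriter();
        w.open(Files.newOutputStream(part));   // the "sink" opens the stream
        w.write("record-1");
        w.close();
        System.out.println(Files.readAllLines(part));
    }
}
```

Because the writer never chooses a path itself, the sink stays free to roll part files whenever it wants; a path-taking writer like AvroParquetWriter breaks this division of responsibility.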

whereas AvroParquetWriter (and ParquetWriter generally) takes a file path:

writer = AvroParquetWriter.<V>builder(new Path("filePath"))
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .withSchema(schema)
        .build();

I dug into ParquetWriter and realized that what we are trying to do does not really make sense: an event-at-a-time processing system like Flink (or Storm) cannot write individual records to Parquet, whereas Spark Streaming can, because it works on the micro-batch principle.

With Storm and Trident we can still write Parquet files, but with Flink we cannot, at least not until Flink introduces something like micro-batches.

So for this type of use case, Spark Streaming is the better choice.

If you want to use Flink, you can do it as a batch job instead.