Can snappy be used on its own, or does it have to be used together with Hadoop?

Asked: 2019-08-16 03:45:03

Tags: flume snappy

I want to compress files in Java using snappy. I have downloaded the required native libraries and set java.library.path to the folder containing the libsnappy.so.1.1.4 file, but I still get the following exception.

| ERROR | [SinkRunner-PollingRunner-DefaultSinkProcessor] | com.omnitracs.otda.dte.flume.sink.hdfs.HDFSEventSink:process(463): process failed org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150) ~[hadoop-common-2.8.5.jar:?]
        at org.apache.flume.sink.hdfs.HDFSCompressedDataStream.open(HDFSCompressedDataStream.java:97) ~[flume-hdfs-sink-1.9.0.jar:1.9.0]
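Note that the UnsatisfiedLinkError is raised inside org.apache.hadoop.util.NativeCodeLoader, which suggests the JVM cannot find Hadoop's own native library (libhadoop.so) built with snappy support; putting libsnappy.so on java.library.path by itself does not satisfy that check. A minimal sketch of the environment one might set before starting the Flume agent, assuming a Hadoop installation with native libraries (the directory path is hypothetical, and JAVA_OPTS is the variable conventionally exported from flume-env.sh -- adjust both to your setup):

```shell
# Hypothetical location of Hadoop's native libraries -- adjust to your install.
HADOOP_NATIVE_DIR=/usr/lib/hadoop/lib/native

# Make libhadoop.so (and the snappy library it links against) visible to the JVM.
export LD_LIBRARY_PATH="$HADOOP_NATIVE_DIR:$LD_LIBRARY_PATH"

# Pass the same directory via java.library.path, e.g. from flume-env.sh.
export JAVA_OPTS="-Djava.library.path=$HADOOP_NATIVE_DIR"
```

If you only need snappy compression in plain Java, without the Hadoop SnappyCodec, a pure-Java binding such as snappy-java can be used standalone; the Hadoop codec path shown in the stack trace is what requires the Hadoop native build.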

0 Answers:

No answers