How to save the MapReduce Reducer output without key-value pairs?

Asked: 2019-03-25 05:44:29

Tags: hadoop mapreduce hdfs

I am writing a MapReduce program to process DICOM images. The purpose of this MapReduce program is to process a DICOM image, extract metadata from it, index it into Solr, and finally, in the Reducer phase, save the original image in HDFS. I want to store that same file in HDFS as the reducer output.

I have implemented most of the functionality, but in the reducer phase, storing the same file in HDFS does not work.

I have checked the processed DICOM file with a DICOM image viewer, which reports the file as corrupted, and the size of the processed DICOM file has increased slightly. For example, the original DICOM file is 628 KB, but when the reducer saves it in HDFS its size changes to 630 KB.

I have tried the solutions from these links, but none of them gives the expected result.

Hadoop mapReduce How to store only values in HDFS

Hadoop - How to Collect Text Output Without Values

Here is the code that reads a DICOM file as a single whole file (without splitting it).

public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable>{

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Never split: each DICOM file must be read as one whole record
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        WholeFileRecordReader reader = new WholeFileRecordReader();
        reader.initialize(split, context);
        return reader;
    }       
}

Custom RecordReader

public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable>{

    private FileSplit fileSplit;
    private Configuration conf;
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {     
        this.fileSplit = (FileSplit) split;
        this.conf = context.getConfiguration();     
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            // Read the entire file into a single BytesWritable value
            byte[] contents = new byte[(int) fileSplit.getLength()];
            System.out.println("Inside nextKeyValue");
            System.out.println(fileSplit.getLength());
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(conf);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public void close() throws IOException {

    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException 
    {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }

}

Mapper class. The mapper class works perfectly for our needs.

public class MapClass{

    public static class Map extends Mapper<NullWritable, BytesWritable, Text, BytesWritable>{   

        @Override
        protected void map(NullWritable key, BytesWritable value,
                Mapper<NullWritable, BytesWritable, Text, BytesWritable>.Context context)
                throws IOException, InterruptedException {
            value.setCapacity(value.getLength());
            InputStream in = new ByteArrayInputStream(value.getBytes());            
            ProcessDicom.metadata(in); // Process dicom image and extract metadata from it
            Text keyOut = getFileName(context);
            context.write(keyOut, value);

        }

        private Text getFileName(Mapper<NullWritable, BytesWritable, Text, BytesWritable>.Context context)
        {
            InputSplit spl = context.getInputSplit();
            Path filePath = ((FileSplit)spl).getPath();
            String fileName = filePath.getName();
            Text text = new Text(fileName);
            return text;
        }

        @Override
        protected void setup(Mapper<NullWritable, BytesWritable, Text, BytesWritable>.Context context)
                throws IOException, InterruptedException {
            super.setup(context);
        }

    }
}

Reducer class. This is the reducer class.

public class ReduceClass {

    public static class Reduce extends Reducer<Text, BytesWritable, BytesWritable, BytesWritable>{

        @Override
        protected void reduce(Text key, Iterable<BytesWritable> value,
                Reducer<Text, BytesWritable, BytesWritable, BytesWritable>.Context context)
                throws IOException, InterruptedException {

            Iterator<BytesWritable> itr = value.iterator();
            while(itr.hasNext())
            {
                BytesWritable wr = itr.next();
                wr.setCapacity(wr.getLength());
                context.write(new BytesWritable(key.copyBytes()), itr.next());
            }
        }
    }
}

Main class

public class DicomIndexer{

    public static void main(String[] argss) throws Exception{
        String args[] = {"file:///home/b3ds/storage/dd","hdfs://192.168.38.68:8020/output"};
        run(args);
    }

    public static void run(String[] args) throws Exception {

        //Initialize the Hadoop job and set the jar as well as the name of the Job
        Configuration conf = new Configuration();
        Job job = new Job(conf, "WordCount");
        job.setJarByClass(WordCount.class);
//      job.getConfiguration().set("mapreduce.output.basename", "hi");
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);
        job.setOutputKeyClass(BytesWritable.class);
        job.setOutputValueClass(BytesWritable.class);

        job.setMapperClass(Map.class);
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(WholeFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        WholeFileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);

    }

}

So I am completely at a loss. Some of the links say it is not possible because MapReduce works with key-value pairs, while others say to use NullWritable. So far I have tried NullWritable and SequenceFileOutputFormat, but neither works.

1 Answer:

Answer 0 (score: 1)

Two things:

  1. By calling itr.next() twice, you are inadvertently consuming two elements at a time in the reducer.

  2. As you have identified, you only want to write out a single value, not a key and a value. Use NullWritable as the value instead. Your reducer would then look like:

    public static class Reduce extends Reducer<Text, BytesWritable, BytesWritable, NullWritable>{
        @Override
        protected void reduce(Text key, Iterable<BytesWritable> value,
                              Reducer<Text, BytesWritable, BytesWritable, NullWritable>.Context context)
                throws IOException, InterruptedException {
            NullWritable nullWritable = NullWritable.get();
            Iterator<BytesWritable> itr = value.iterator();
            while(itr.hasNext())
            {
                BytesWritable wr = itr.next();
                wr.setCapacity(wr.getLength());
                context.write(wr, nullWritable);
            }
        }
    }
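For completeness, the driver from the question would also have to match this reducer's new output types. The following is only a rough sketch of the adjustments implied by the answer, assuming the same class names as above:

    // Map output is unchanged: (Text, BytesWritable)
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(BytesWritable.class);
    // Final output now matches the reducer above: (BytesWritable, NullWritable)
    job.setOutputKeyClass(BytesWritable.class);
    job.setOutputValueClass(NullWritable.class);
    // Drop the combiner: Reduce no longer emits (Text, BytesWritable),
    // so it can no longer run between the map and reduce phases
    // job.setCombinerClass(Reduce.class);

Note also that even with NullWritable as the value, SequenceFileOutputFormat still writes its own header and per-record framing around each BytesWritable, so the stored output is not a byte-for-byte copy of the original .dcm file and will not open directly in a DICOM viewer. If an identical copy of the image is required, one option (not part of the answer above; the output directory here is just a placeholder) is to bypass the OutputFormat and write the bytes straight to HDFS from inside reduce():

    // Hypothetical sketch (inside reduce()): write the raw DICOM bytes directly to HDFS.
    // "/dicom/output" is an assumed directory, not from the original post.
    FileSystem fs = FileSystem.get(context.getConfiguration());
    Path outFile = new Path("/dicom/output/" + key.toString());
    BytesWritable wr = value.iterator().next();           // one image per file name
    try (FSDataOutputStream out = fs.create(outFile, true)) {
        out.write(wr.getBytes(), 0, wr.getLength());       // copy only the valid bytes
    }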