hadoop-streaming: writing output to different files

Date: 2011-10-10 16:31:24

Tags: hadoop mapreduce hadoop-streaming

Here is the scenario:

           Reducer1  
         /  
Mapper - - Reducer2  
         \   
           ReducerN  

In the reducer I want to write the data to different files. Say the reducer looks like this:

def reduce():
  for line in sys.stdin:
    if line == type1:
      create_type_1_file(line)
    elif line == type2:
      create_type_2_file(line)
    elif line == type3:
      create_type_3_file(line)
    # ... and so on

def create_type_1_file(line):
  # writes to file1

def create_type_2_file(line):
  # writes to file2

def create_type_3_file(line):
  # writes to file3

Consider the write paths to be:

file1 = /home/user/data/file1  
file2 = /home/user/data/file2  
file3 = /home/user/data/file3  

When I run this in pseudo-distributed mode (a single-node machine with the HDFS daemons running), everything works fine because all the daemons write to the same set of files.

Questions:

- If I run this on a cluster of 1000 machines, will they all write to the same set of files? In this case I am writing to the local filesystem.
- Is there a better way to do this in hadoop streaming?

Thanks

2 answers:

Answer 0 (score: 0)

通常,reduce的o / p被写入可靠的存储系统,如HDFS,因为如果其中一个节点发生故障,则与该节点关联的reduce数据将丢失。在Hadoop框架的上下文之外再次运行该特定的reduce任务是不可能的。此外,一旦作业完成,必须针对不同的输入类型合并来自1000个节点的o / p。

Concurrent writes are not supported in HDFS. There may be cases where multiple reducers write to the same file in HDFS, which can corrupt the file. And when multiple reduce tasks run on a single node, concurrency can also be a problem if they write to a single local file.

One solution is to use a reduce-task-specific file name and later combine all the files for a particular input type; a minimal sketch of this idea for a streaming reducer is shown below.
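The following sketch is not from the original answer; it illustrates the idea for a Python streaming reducer. It assumes Hadoop streaming exposes the job configuration to the script as environment variables with dots replaced by underscores (so mapred.task.id becomes mapred_task_id), and it reuses the /home/user/data directory and the type1/type2/type3 checks from the question as placeholders.

#!/usr/bin/env python
# Sketch: each reduce task appends its own task id to the file names it writes,
# so no two tasks ever write to the same file.
import os
import sys

# mapred_task_id is assumed to be set by Hadoop streaming from mapred.task.id;
# fall back to the process id when running the script locally.
task_id = os.environ.get("mapred_task_id", "local-%d" % os.getpid())
out_dir = "/home/user/data"   # path taken from the question

files = {}  # one open file handle per output type

def write_line(name, line):
    if name not in files:
        path = os.path.join(out_dir, "%s-%s" % (name, task_id))
        files[name] = open(path, "a")
    files[name].write(line)

for line in sys.stdin:
    if line.startswith("type1"):      # placeholder tests, as in the question
        write_line("file1", line)
    elif line.startswith("type2"):
        write_line("file2", line)
    elif line.startswith("type3"):
        write_line("file3", line)

for f in files.values():
    f.close()

The per-task files for one type can later be concatenated into a single file, for example with cat /home/user/data/file1-* > file1 on the local filesystem, or with hadoop fs -getmerge if the files are written to HDFS instead.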

Answer 1 (score: 0)

Output can be written from the Reducer to more than one location using the MultipleOutputs class. You can treat file1, file2 and file3 as three folders and write the output data of the 1000 reducers into these folders separately.

Usage pattern for job submission:

Job job = new Job();

FileInputFormat.addInputPath(job, inDir);

// outDir is the root path; in this case outDir = "/home/user/data/"
FileOutputFormat.setOutputPath(job, outDir);

// You have to assign the output format class. Using MultipleOutputs this way
// still creates the zero-sized default output files (e.g. part-00000). To
// prevent this, use LazyOutputFormat.setOutputFormatClass(...) instead of
// job.setOutputFormatClass(TextOutputFormat.class) in the job configuration.
LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);

job.setMapperClass(MOMap.class);
job.setReducerClass(MOReduce.class);

...

job.waitForCompletion(true);

Usage in the Reducer:

private MultipleOutputs<Text, Text> out;

public void setup(Context context) {
    out = new MultipleOutputs<Text, Text>(context);
    ...
}

public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    // '/' characters in baseOutputPath are translated into directory levels in
    // the file system. Append "part" (or similar) to your custom path,
    // otherwise the output files will be named -00000, -00001, etc.
    // No call to context.write() is necessary.
    for (Text line : values) {
        // type1/type2/type3 are placeholders carried over from the question
        if (line.equals(type1))
            out.write(key, new Text(line), "file1/part");
        else if (line.equals(type2))
            out.write(key, new Text(line), "file2/part");
        else if (line.equals(type3))
            out.write(key, new Text(line), "file3/part");
    }
}

protected void cleanup(Context context) throws IOException, InterruptedException {
    out.close();
}

REF:https://hadoop.apache.org/docs/r2.6.3/api/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.html