MapReduce job is not being executed

Time: 2015-03-02 22:12:53

Tags: hadoop mapreduce avro

I have a simple MapReduce job that I borrowed, with a few small modifications, from the avro website (I removed the reducer). It basically takes a simple Avro file as input. Here is the schema of that Avro file.

Avro schema:

{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": "int"},
    {"name": "favorite_color", "type": "string"}
  ]
}
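
For context, here is a minimal sketch (not from the original post) of how an input file matching this schema could be written with the Avro specific API, assuming User is the class the Avro compiler generates from the schema above and that the job reads from the in directory; the file name in/users.avro and the field values are purely illustrative.

    import java.io.File;

    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.specific.SpecificDatumWriter;

    // Illustrative helper: writes a single User record to in/users.avro so the
    // job described below has something to read.
    public class WriteSampleUsers {
      public static void main(String[] args) throws Exception {
        User user = User.newBuilder()
            .setName("Alyssa")
            .setFavoriteNumber(256)
            .setFavoriteColor("blue")
            .build();

        try (DataFileWriter<User> writer =
                 new DataFileWriter<>(new SpecificDatumWriter<>(User.class))) {
          writer.create(user.getSchema(), new File("in/users.avro"));
          writer.append(user);
        }
      }
    }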

Here is my MapReduce job (the mapper and the main function):

public class ColorCountMapper extends Mapper<AvroKey<User>, NullWritable, Text, IntWritable> {

  @Override
  public void map(AvroKey<User> key, NullWritable value, Context context)
      throws IOException, InterruptedException {

    CharSequence color = key.datum().getFavoriteColor();
    if (color == null) {
      color = "none";
    }
    context.write(new Text(color.toString()), new IntWritable(1));
  }
}

public static void main(String[] args) throws Exception {

    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "TestColor");
    job.setJarByClass(runClass.class);
    job.setJobName("Color Count");

    FileInputFormat.setInputPaths(job, new Path("in"));
    FileOutputFormat.setOutputPath(job, new Path("out"));

    job.setInputFormatClass(AvroKeyInputFormat.class);
    job.setMapperClass(ColorCountMapper.class);
    AvroJob.setInputKeySchema(job, User.getClassSchema());
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);

    boolean r = job.waitForCompletion(true);
    System.out.println(r);
}

When I run the program, it returns false and does not succeed. I cannot figure out what the problem is. Can anyone help?

1 answer:

Answer 0 (score: 0)

You have set the Mapper's value type to NullWritable, but in the main/driver you set the map-output value class to IntWritable. The value type declared in the Mapper and the one set in the main/driver should be the same. Modify your program accordingly. If this gives you the solution, please accept my answer.
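
For reference, below is a minimal sketch (not part of the posted answer, and not necessarily the fix to the original problem) of a driver in which the Mapper's declared output types and the driver settings line up explicitly; since the reducer was removed, it also declares the job map-only. The class name ColorCountDriver and the map-only setting are my assumptions, not the poster's code.

    import org.apache.avro.mapreduce.AvroJob;
    import org.apache.avro.mapreduce.AvroKeyInputFormat;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ColorCountDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "Color Count");
        job.setJarByClass(ColorCountDriver.class);

        // Input side: AvroKeyInputFormat hands the mapper AvroKey<User> keys and
        // NullWritable values, matching the first two Mapper type parameters.
        job.setInputFormatClass(AvroKeyInputFormat.class);
        AvroJob.setInputKeySchema(job, User.getClassSchema());
        job.setMapperClass(ColorCountMapper.class);

        // Output side: these must match the last two Mapper type parameters
        // (Text, IntWritable).
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // The reducer was removed, so declare the job map-only; map output then
        // goes straight to the output format with these types.
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.setInputPaths(job, new Path("in"));
        FileOutputFormat.setOutputPath(job, new Path("out"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

With zero reduce tasks the default (identity) reducer never runs, so only the map-output and job-output type settings shown above are in play.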