Hadoop ClassCastException with the default InputFormat

Date: 2013-04-04 14:36:10

Tags: hadoop mapreduce

I ran into a problem getting my first map-reduce job running on Hadoop. I copied the code below from "Hadoop: The Definitive Guide", but I cannot get it to run on my single-node Hadoop installation.

My code snippets:

Main:

Job job = new Job(); 
job.setJarByClass(MaxTemperature.class);
job.setJobName("Max temperature");

FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));

job.setMapperClass(MaxTemperatureMapper.class);
job.setReducerClass(MaxTemperatureReducer.class);

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);

System.exit(job.waitForCompletion(true) ? 0 : 1);

Mapper:

public void map(LongWritable key, Text value, Context context)

Reducer:

public void reduce(Text key, Iterable<IntWritable> values,
Context context)
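
For completeness, here is a sketch of the reduce implementation (it follows the book's MaxTemperatureReducer, which simply keeps the maximum value per key):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {

    // Keep the largest temperature observed for this year.
    int maxValue = Integer.MIN_VALUE;
    for (IntWritable value : values) {
      maxValue = Math.max(maxValue, value.get());
    }
    context.write(key, new IntWritable(maxValue));
  }
}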

The map and reduce function implementations were likewise taken straight from the book. But when I try to run this code, this is the error I get:

INFO mapred.JobClient: Task Id : attempt_201304021022_0016_m_000000_0, Status : FAILED
    java.lang.ClassCastException: interface javax.xml.soap.Text
    at java.lang.Class.asSubclass(Class.java:3027)
    at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:774)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:959)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:674)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

The answer to a similar past question (Hadoop type mismatch in key from map expected value Text received value LongWritable) helped me figure out that the InputFormat class has to match the map function's input. So I also tried adding job.setInputFormatClass(TextInputFormat.class); to my main method, but that did not solve the problem either. What could the problem be?

Here is the implementation of the Mapper class:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final int MISSING = 9999;

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {

    String line = value.toString();
    String year = line.substring(15, 19);

    int airTemperature;
    if (line.charAt(45) == '+') { // parseInt doesn't like leading plus signs
      airTemperature = Integer.parseInt(line.substring(46, 50));
    } else {
      airTemperature = Integer.parseInt(line.substring(45, 50));
    }
    String quality = line.substring(50, 51);
    if (airTemperature != MISSING && quality.matches("[01459]")) {
      context.write(new Text(year), new IntWritable(airTemperature));
    }
  }
}

2 Answers:

Answer 0 (score: 3)

Your IDE auto-imported the wrong class. You imported javax.xml.soap.Text instead of org.apache.hadoop.io.Text.

You can find an example of this wrong import in this blog.
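
The cast fails in getOutputKeyComparator because job.setOutputKeyClass(Text.class) ends up referring to the SOAP Text, which is not a WritableComparable. A minimal sketch of the driver with the correct imports (assuming the class is named MaxTemperature, as in the book):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text; // the Hadoop Text, not javax.xml.soap.Text
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

  public static void main(String[] args) throws Exception {
    Job job = new Job();
    job.setJarByClass(MaxTemperature.class);
    job.setJobName("Max temperature");

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    // With the right import, Text implements WritableComparable,
    // so the map output buffer can build its key comparator.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}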

Answer 1 (score: 2)

It looks like you imported the wrong Text class (javax.xml.soap.Text). You want org.apache.hadoop.io.Text.
