Getting a ClassCastException when running a MapReduce job

Date: 2016-01-18 08:48:55

Tags: hadoop mapreduce accumulo

I have a MapReduce job that fetches data from an Accumulo table, performs some operations, and then stores the result in another Accumulo table. I have the following mapper, combiner, and reducer:

class PivotTableMapper extends Mapper<Key, Value, Text, Text> {
    @Override
    public void map(Key k, Value v, Context context)
            throws IOException, InterruptedException {
        // Doing something here...
        context.write(outputKey, outputValue); // both of type Text
    }
}

class PivotTableCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text k, Iterable<Text> v, Context context)
            throws IOException, InterruptedException {
        // Doing something here...
        context.write(outputKey, outputValue); // both of type Text
    }
}

class PivotTableReducer extends Reducer<Text, Text, Text, Mutation> {
    @Override
    public void reduce(Text k, Iterable<Text> v, Context context)
            throws IOException, InterruptedException {
        // Doing something here...
        context.write(null, mutation); // a Mutation for the output table
    }
}

@Override
public int run(String[] args) throws Exception {
    Job job = Job.getInstance(conf);
    job.setInputFormatClass(AccumuloInputFormat.class);
    job.setOutputFormatClass(AccumuloOutputFormat.class);
    // Some additional settings
    return job.waitForCompletion(true) ? 0 : 1;
}

When I run the job, I get a ClassCastException:

Error: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.accumulo.core.data.Mutation
        at org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat$AccumuloRecordWriter.write(AccumuloOutputFormat.java:409)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
        at com.latize.ulysses.accumulo.postprocess.PivotTable$PivotTableMapper.map(PivotTable.java:48)
        at com.latize.ulysses.accumulo.postprocess.PivotTable$PivotTableMapper.map(PivotTable.java:1)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/01/18 16:16:27 INFO mapreduce.Job:  map 33% reduce 0%
16/01/18 16:16:27 INFO mapreduce.Job: Task Id : attempt_1453096833928_0021_m_000001_1, Status : FAILED
Error: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.accumulo.core.data.Mutation
        (same stack trace as above)

Can someone tell me what I am doing wrong here? Is my combination of classes correct?

1 Answer:

Answer 0 (score: 1)

Typically, you do not need a Reducer at all when using AccumuloOutputFormat (Accumulo itself behaves much like a Reducer). In that case your Mapper would accept <Key, Value> and output <Text, Mutation>.
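If you went the reducer-less route, the Mapper itself would emit Mutations directly. A hedged sketch of that variant (the class name, column family, and qualifier are placeholders, not from the original post):

```java
// Map-only variant: the Mapper writes Mutations straight to Accumulo,
// so no Reducer (and no intermediate Text/Text shuffle) is needed.
class PivotTableDirectMapper extends Mapper<Key, Value, Text, Mutation> {
    @Override
    public void map(Key k, Value v, Context context)
            throws IOException, InterruptedException {
        Mutation m = new Mutation(k.getRow());
        // Hypothetical column family/qualifier; adapt to your schema.
        m.put(new Text("cf"), new Text("cq"), v);
        // A null key tells AccumuloOutputFormat to use the default table.
        context.write(null, m);
    }
}
```

With this variant you would also call `job.setNumReduceTasks(0)` so the map output goes directly to the output format.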

For your particular case, your Mappers need to write <Text, Text> pairs, which are then sorted and handed to your actual Reducer. Change your Mapper's output key and value parameterization accordingly, and make sure the job actually registers the Reducer: the NewDirectOutputCollector in your stack trace shows the map output is going straight to the AccumuloOutputFormat, i.e. the job is running with zero reduce tasks, so the Text values hit a writer that expects Mutations.
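The wiring the answer describes can be sketched as the following job configuration (a minimal sketch, not the poster's actual code; the job name and the assumption that `conf` is available are placeholders):

```java
// Hypothetical job setup for the Mapper -> Combiner -> Reducer pipeline.
Job job = Job.getInstance(conf, "PivotTable");
job.setJarByClass(PivotTable.class);

job.setInputFormatClass(AccumuloInputFormat.class);
job.setOutputFormatClass(AccumuloOutputFormat.class);

job.setMapperClass(PivotTableMapper.class);
job.setCombinerClass(PivotTableCombiner.class);
job.setReducerClass(PivotTableReducer.class);

// The map output types (Text/Text) differ from the final output
// types (Text/Mutation), so both pairs must be declared explicitly:
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Mutation.class);

// With zero reduce tasks the map output would go straight to
// AccumuloOutputFormat, which expects Mutation values -- exactly
// the ClassCastException in the stack trace above.
job.setNumReduceTasks(1);
```

Note that if the map output classes are not set, Hadoop assumes they match the final output classes, which would also break the Text/Text shuffle here.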