Error when running a map-reduce job that reads from Elasticsearch

Asked: 2014-09-08 11:12:06

Tags: java hadoop elasticsearch mapreduce

When I try to run a map-reduce job that reads data from Elasticsearch, I get the following error:

java.lang.Exception: java.lang.RuntimeException: problem advancing post rec#0
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: problem advancing post rec#0
    at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1364)
    at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:220)
    at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:216)
    at org.apache.hadoop.mapred.lib.IdentityReducer.reduce(IdentityReducer.java:45)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: can't find class: org.elasticsearch.hadoop.mr.LinkedMapWritable because org.elasticsearch.hadoop.mr.LinkedMapWritable
    at org.apache.hadoop.io.AbstractMapWritable.readFields(AbstractMapWritable.java:212)
    at org.apache.hadoop.io.MapWritable.readFields(MapWritable.java:169)
    at org.elasticsearch.hadoop.mr.LinkedMapWritable.readFields(LinkedMapWritable.java:148)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
    at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:1421)
    at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1361)
    ... 12 more
14/09/08 16:18:43 INFO mapreduce.Job: Job job_local1675221004_0001 failed with state FAILED due to: NA

My main runner class is as follows:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;

public class Es2 {

    static private final Path TMP_DIR = new Path(Es2.class.getSimpleName()
            + "_TMP_1");

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException {

        JobConf conf = new JobConf();
        // Read from the conceptnet_data/concept index/type, filtered by a URI query.
        conf.set("es.resource", "conceptnet_data/concept");
        conf.set("es.query", "?q=me*");
        conf.setInputFormat(EsInputFormat.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(LinkedMapWritable.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LinkedMapWritable.class);
        conf.setOutputFormat(TextOutputFormat.class);
        conf.setMapperClass(mapper1.class);
        final Path outDir = new Path(TMP_DIR, "out");
        FileOutputFormat.setOutputPath(conf, outDir);
        JobClient.runJob(conf);
    }
}

The mapper class is as follows:

import java.io.IOException;

import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;

public class mapper1 extends MapReduceBase implements
        Mapper<Object, Object, Text, MapWritable> {

    @Override
    public void map(Object key, Object value, OutputCollector<Text, MapWritable> output,
            Reporter reporter) throws IOException {
        // EsInputFormat emits (Text docId, LinkedMapWritable doc) pairs; pass them through unchanged.
        Text docId = (Text) key;
        MapWritable doc = (LinkedMapWritable) value;
        output.collect(docId, doc);
    }
}

Any guidance on this issue would be appreciated.

1 Answer:

Answer 0 (score: 0):

I had the same problem, and I solved it by adding the elasticsearch-hadoop jar to Hadoop's classpath.

Try something like this:

export HADOOP_CLASSPATH=/home/tariq/java/library/elasticsearch-hadoop-mr-2.0.2.jar:/home/tariq/java/library/elasticsearch-hadoop-2.0.2.jar
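
Exporting HADOOP_CLASSPATH mainly fixes the local/client classpath; if the job later runs on a real cluster, the jar can also be shipped with the job itself through Hadoop's generic -libjars option. The sketch below is only an illustration of that approach, not code from the question: the class name Es2Tool and the job jar name are assumptions, and it simply reworks the original runner to go through ToolRunner so that -libjars is honored.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;

public class Es2Tool extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already carries whatever -libjars/-conf options
        // ToolRunner's GenericOptionsParser picked up from the command line.
        JobConf conf = new JobConf(getConf(), Es2Tool.class);
        conf.set("es.resource", "conceptnet_data/concept");
        conf.set("es.query", "?q=me*");
        conf.setInputFormat(EsInputFormat.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(LinkedMapWritable.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LinkedMapWritable.class);
        conf.setOutputFormat(TextOutputFormat.class);
        conf.setMapperClass(mapper1.class);
        FileOutputFormat.setOutputPath(conf, new Path("Es2Tool_TMP_1/out"));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new Es2Tool(), args));
    }
}

It could then be launched with something like:

hadoop jar es2.jar Es2Tool -libjars /home/tariq/java/library/elasticsearch-hadoop-mr-2.0.2.jar

where es2.jar is an assumed name for the job jar and the -libjars path reuses the one from the export line above.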