ColumnFamilyInputFormat - Could not get input splits

Asked: 2012-11-26 14:33:00

Tags: hadoop nosql cassandra elastic-map-reduce

I get a strange exception when I try to access Cassandra from Hadoop using the ColumnFamilyInputFormat class. This is how I connect to Cassandra in my Hadoop job, with cassandra-all.jar version 1.1 on the classpath:

private void setCassandraConfig(Job job) {
    job.setInputFormatClass(ColumnFamilyInputFormat.class);
    ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
    ConfigHelper
        .setInputInitialAddress(job.getConfiguration(), "204.236.1.29");
    ConfigHelper.setInputPartitioner(job.getConfiguration(),
            "RandomPartitioner");
    ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE,
            COLUMN_FAMILY);
    SlicePredicate predicate = new SlicePredicate().setColumn_names(Arrays
            .asList(ByteBufferUtil.bytes(COLUMN_NAME)));
    ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
    // this will cause the predicate to be ignored in favor of scanning
    // everything as a wide row
    ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE,
            COLUMN_FAMILY, true);
    ConfigHelper.setOutputInitialAddress(job.getConfiguration(),
            "204.236.1.29");
    ConfigHelper.setOutputPartitioner(job.getConfiguration(),
            "RandomPartitioner");
}

public int run(String[] args) throws Exception {
    // use a smaller page size that doesn't divide the row count evenly to
    // exercise the paging logic better
    ConfigHelper.setRangeBatchSize(getConf(), 99);

    Job processorJob = new Job(getConf(), "dmp_normalizer");
    processorJob.setJarByClass(DmpProcessorRunner.class);
    processorJob.setMapperClass(NormalizerMapper.class);
    processorJob.setReducerClass(SelectorReducer.class);
    processorJob.setOutputKeyClass(Text.class);
    processorJob.setOutputValueClass(Text.class);
    FileOutputFormat
            .setOutputPath(processorJob, new Path(TEMP_PATH_PREFIX));
    processorJob.setOutputFormatClass(TextOutputFormat.class);
    setCassandraConfig(processorJob);
    ...
}

But when I run the Hadoop job (I am running it on Amazon EMR), I get the exception below. Note that the IP it reports is 127.0.0.1 instead of the IP I configured...

Any hints? What could I be doing wrong?

2012-11-22 21:37:34,235 ERROR org.apache.hadoop.security.UserGroupInformation (Thread-6): PriviledgedActionException as:hadoop cause:java.io.IOException: Could not get input splits 
2012-11-22 21:37:34,235 INFO org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob (Thread-6): dmp_normalizer got an error while submitting
java.io.IOException: Could not get input splits
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:178)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1017)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1034)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:174)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:952)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:905)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:905)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:336)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:233)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: failed connecting to all endpoints 127.0.0.1
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:174)
    ... 13 more
Caused by: java.io.IOException: failed connecting to all endpoints 127.0.0.1
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSubSplits(ColumnFamilyInputFormat.java:272)
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.access$200(ColumnFamilyInputFormat.java:77)
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:211)
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:196)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    ... 1 more
2012-11-22 21:37:39,319 INFO com.s1mbi0se.dmp.processor.main.DmpProcessorRunner (main): Process ended

2 Answers:

Answer 0 (score: 1)

I was able to solve the problem by changing the Cassandra configuration: listen_address has to be a valid external IP for this to work.

The exception does not look related to that at all, which is why it took me a long time to find the answer. In short: if you specify 0.0.0.0 in the Cassandra configuration and then try to reach the node from an external IP, you get this error saying that no host could be reached at 127.0.0.1.
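
For reference, this is the kind of change the answer describes in cassandra.yaml. It is only a minimal sketch, not the poster's actual configuration; the address is illustrative and should be replaced with the node's own routable IP:

# cassandra.yaml (Cassandra 1.1) -- illustrative values only
listen_address: 204.236.1.29   # inter-node address; a reachable IP, not 0.0.0.0 or 127.0.0.1
rpc_address: 204.236.1.29      # Thrift address the Hadoop job connects to (port 9160)

Restart the node after the change, so that the endpoints it advertises to ColumnFamilyInputFormat are no longer 127.0.0.1.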

Answer 1 (score: -1)

In my case the problem was a wrong keyspace name. Look carefully at what you pass to the ConfigHelper.setInputColumnFamily method.
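
If you are not sure which of the two causes above applies, a quick sanity check is to query the cluster directly with the Thrift client bundled with cassandra-all 1.1. This is only a sketch to illustrate the idea, not code from the original post; the host, port, and keyspace placeholder are assumptions taken from the question:

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.NotFoundException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class CassandraSanityCheck {
    public static void main(String[] args) throws Exception {
        // Same host/port the job passes to ConfigHelper.setInputInitialAddress / setInputRpcPort.
        TTransport transport = new TFramedTransport(new TSocket("204.236.1.29", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        try {
            // Throws NotFoundException if the keyspace name given to
            // ConfigHelper.setInputColumnFamily is misspelled (answer 1).
            System.out.println(client.describe_keyspace("KEYSPACE_NAME"));
            // Prints the token ranges with their endpoints; if they show 127.0.0.1,
            // listen_address is misconfigured (answer 0) and getSplits will fail.
            System.out.println(client.describe_ring("KEYSPACE_NAME"));
        } catch (NotFoundException e) {
            System.err.println("Keyspace does not exist: " + e);
        } finally {
            transport.close();
        }
    }
}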
