I have a pseudo-distributed Hadoop cluster running as a Docker container:
docker run -d -p 50070:50070 -p 9000:9000 -p 8032:8032 -p 8088:8088 --name had00p sequenceiq/hadoop-docker:2.6.0 /etc/bootstrap.sh -d
The configuration is described here: https://github.com/sequenceiq/docker-hadoop-ubuntu/
I can work with HDFS and access the web UI just fine, but when I try to submit a Java job I get:
ClassNotFoundException: Class com.github.mikhailerofeev.hadoop.Script$MyMapper not found
Here is the sample code:
@Override
public Configuration getConf() {
    String host = BOOT_TO_DOCKER_IP;
    int nameNodeHdfsPort = 9000;
    int yarnPort = 8032;
    String yarnAddr = host + ":" + yarnPort;
    String hdfsAddr = "hdfs://" + host + ":" + nameNodeHdfsPort + "/";
    Configuration configuration = new Configuration();
    configuration.set("yarn.resourcemanager.address", yarnAddr);
    configuration.set("mapreduce.framework.name", "yarn");
    // Note: "fs.default.name" is deprecated; "fs.defaultFS" is the current key.
    configuration.set("fs.default.name", hdfsAddr);
    return configuration;
}
private void simpleMr(String inputPath) throws IOException {
    JobConf conf = new JobConf(getConf(), Script.class);
    conf.setJobName("fun");
    conf.setJarByClass(MyMapper.class);
    conf.setMapperClass(MyMapper.class);
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);
    FileInputFormat.setInputPaths(conf, inputPath);

    String tmpMRreturn = "/user/m-erofeev/map-test.data";
    Path returnPath = new Path(tmpMRreturn);
    FileOutputFormat.setOutputPath(conf, returnPath);

    // Remove a stale output path, if any, before submitting the job.
    AccessUtils.execAsRootUnsafe(() -> {
        FileSystem fs = FileSystem.get(getConf());
        if (fs.exists(returnPath)) {
            fs.delete(returnPath, true);
        }
    });

    AccessUtils.execAsRootUnsafe(() -> {
        RunningJob runningJob = JobClient.runJob(conf);
        runningJob.waitForCompletion();
    });
}
AccessUtils.execAsRootUnsafe is a wrapper around UserGroupInformation, and it works fine for plain HDFS access.
Where am I going wrong?
UPD: I realized it might fail because the Hadoop image runs Java 7 while I build with Java 8, and I planned to check that later. But I would expect a different failure message in that case...
UPD2: switching to Java 7 made no difference.
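As a side note on the UPD above: a bytecode-version mismatch (classes compiled for Java 8 submitted to a Java 7 cluster) would normally surface as an UnsupportedClassVersionError, not a ClassNotFoundException. A minimal, Hadoop-free sketch for checking what version a class was actually compiled for (the `ClassVersion` class and its helper are illustrative, not part of the original code):

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class ClassVersion {
    // Reads the class-file major version from the compiled .class resource:
    // 51 = Java 7, 52 = Java 8, and so on.
    static int majorVersion(Class<?> clazz) throws Exception {
        String res = clazz.getName().replace('.', '/') + ".class";
        try (InputStream in = clazz.getClassLoader().getResourceAsStream(res);
             DataInputStream din = new DataInputStream(in)) {
            if (din.readInt() != 0xCAFEBABE) {
                throw new IllegalStateException("not a class file");
            }
            din.readUnsignedShort(); // skip minor version
            return din.readUnsignedShort(); // major version
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("compiled for major version " + majorVersion(ClassVersion.class));
    }
}
```

Running this against the job's mapper class before submitting would tell you whether the jar on the cluster could even be loaded by its JVM.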
Answer 0 (score: 0)
My mistake: I was running the code straight from the IDE without packaging it into a jar, so the setJarByClass() call had nothing to point at.
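To see why this fails silently on the client side: setJarByClass() works by locating the jar that the given class was loaded from, and shipping that jar to the cluster. When the class was loaded from a plain directory (which is how IDEs run code), there is no jar to ship, so the mapper class never reaches the NodeManagers and they throw ClassNotFoundException. A rough, self-contained sketch of that lookup logic (this loosely mimics Hadoop's internal jar discovery; the `JarLocator` class is illustrative, not Hadoop API):

```java
import java.net.URL;

public class JarLocator {
    // Returns the path of the jar the class was loaded from,
    // or null if it was loaded from a directory (e.g. an IDE build dir).
    static String findContainingJar(Class<?> clazz) {
        String resource = clazz.getName().replace('.', '/') + ".class";
        URL url = clazz.getClassLoader().getResource(resource);
        if (url != null && "jar".equals(url.getProtocol())) {
            // Path looks like "file:/path/to/app.jar!/com/example/Foo.class".
            String path = url.getPath();
            return path.substring("file:".length(), path.indexOf('!'));
        }
        return null; // loaded from a directory: nothing to submit to YARN
    }

    public static void main(String[] args) {
        // Run from an IDE or a classes directory, this prints null.
        System.out.println(findContainingJar(JarLocator.class));
    }
}
```

The practical fix is to build the jar (e.g. with Maven or Gradle) and either submit it with the `hadoop jar` command or point the job at it explicitly via JobConf.setJar(pathToJar).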