hbase importTsv FileNotFoundException

Date: 2013-10-23 08:46:18

Tags: java hbase

My setup is Hadoop 2.0.0 with HBase 0.96, everything running in pseudo-distributed mode.

When I run importTsv with the following command:

./hbase  org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,surname,name,age persons 'hdfs://localhost:9000/user/joe/persons.tsv'
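For reference, the table and input file were prepared roughly like this (a sketch; the column-family names are an assumption inferred from the -Dimporttsv.columns list, since ImportTsv requires the target table's column families to already exist):

```shell
# Create the target table with one family per imported column
# (assumed family names: surname, name, age).
echo "create 'persons', 'surname', 'name', 'age'" | ./hbase shell

# Upload the TSV input to the HDFS path used in the importTsv command.
hdfs dfs -mkdir -p /user/joe
hdfs dfs -put persons.tsv hdfs://localhost:9000/user/joe/persons.tsv
```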

It tries to read the nonexistent file hdfs://localhost:9000/home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar.

The stack trace is below.

Thanks a lot for any help.

2013-10-22 19:33:52,079 INFO  [main] mapreduce.TableOutputFormat: Created table instance for persons
2013-10-22 19:33:53,253 INFO  [main] mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-joe/mapred/staging/joe1659915806/.staging/job_local1659915806_0001
2013-10-22 19:33:53,256 ERROR [main] security.UserGroupInformation: PriviledgedActionException as:joe (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://localhost:9000/home/joe/Programs/hbase-0.96.0-hadoop2/lib/hbase-client-0.96.0-hadoop2.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
    at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:480)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:484)


1 Answer:

Answer 0 (score: 0):

In Hadoop, set the configuration parameters as follows. Without mapreduce.framework.name set to yarn, the job falls back to the LocalJobRunner (note the file:/tmp staging path in the log) while the submitter still qualifies the local HBase dependency jar paths against the default HDFS filesystem, which produces exactly this FileNotFoundException. In etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

In etc/hadoop/yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
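With these two properties set, the job submits to YARN instead of the LocalJobRunner. In pseudo-distributed mode, fs.defaultFS should also match the URI used in the importTsv command; a minimal etc/hadoop/core-site.xml under that assumption would look like:

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
```

After editing the files, restart the daemons (sbin/stop-yarn.sh and sbin/start-yarn.sh, plus sbin/stop-dfs.sh and sbin/start-dfs.sh if core-site.xml changed) and rerun the importTsv command.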