Hadoop INFO ipc.Client: Retrying connect to server localhost/127.0.0.1:9000

Date: 2016-08-29 19:33:50

Tags: linux hadoop hdfs hadoop-partitioning

I have read the other posts about Hadoop HDFS configuration problems, but none of them helped, so I am posting my own question. I followed this tutorial for Hadoop v1.2.1. When I run the hadoop fs -ls command, I get this error:

16/08/29 15:20:35 INFO ipc.Client: Retrying connect to server:   localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
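One quick way to see whether a NameNode process exists and whether anything is actually listening on port 9000 (this assumes jps and netstat are available on the machine):

jps                                      # a running HDFS should show a NameNode entry
netstat -tlnp 2>/dev/null | grep 9000    # should show a listener bound to 127.0.0.1:9000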

My core-site.xml file is:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/miczfs/hadoop/tmp/${user.name}</value>
  </property>
</configuration>

Also, my hdfs-site.xml file is as follows:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/mnt/miczfs/hadoop/hdfs/${user.name}/namenode</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>localhost:0</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/mnt/miczfs/hadoop/hdfs/${user.name}/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:0</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:0</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>localhost:0</value>
  </property>
</configuration>

And my /etc/hosts is like this:

127.0.0.1   localhost localhost.localdomain localhost4  localhost4.localdomain4
::1         localhost localhost.localdomain localhost6    localhost6.localdomain6
172.31.1.1  micrasrelmond.local micrasrelmond #Generated-by-micctrl
172.31.1.1  mic0.local mic0 #Generated-by-micctrl

Please help me if you can. Thanks.

1 Answer:

Answer 0 (score: 1)

First, check whether the namenode is running with the jps command. If it is not running, format the namenode with the command bin/hadoop namenode -format and then start HDFS again.
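A rough sketch of those steps for Hadoop 1.2.1, run from the Hadoop installation directory (note that formatting wipes any existing HDFS metadata, so only do this on a fresh or throwaway setup):

jps                            # check whether a NameNode process is listed
bin/stop-all.sh                # stop any half-started daemons first
bin/hadoop namenode -format    # re-initialize the namenode storage directory (destroys HDFS metadata)
bin/start-dfs.sh               # start NameNode, SecondaryNameNode and DataNode
jps                            # NameNode should now appear, and port 9000 should accept connections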

To avoid having to format the namenode after every reboot, change the HDFS default directory (hadoop.tmp.dir, which by default lives under /tmp and is wiped on restart) to some other location.
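For reference, pinning hadoop.tmp.dir to a persistent location in core-site.xml looks roughly like this (the /var/hadoop/tmp path below is only an illustration; the config in the question already points it at /mnt/miczfs/hadoop/tmp/${user.name}):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/hadoop/tmp/${user.name}</value>
</property>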