Datanode and namenode are not running

Date: 2015-08-18 07:07:53

Tags: hadoop

Here is the output of jps:

ibrahim@Fatima:~$ jps
24019 SecondaryNameNode
24293 NodeManager
24692 Jps
24167 ResourceManager

ibrahim@Fatima:~$ netstat -plten | grep java
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1000       185862      24019/java      
tcp6       0      0 :::8088                 :::*                    LISTEN      1000       190431      24167/java      
tcp6       0      0 :::8030                 :::*                    LISTEN      1000       187327      24167/java      
tcp6       0      0 :::8031                 :::*                    LISTEN      1000       187319      24167/java      
tcp6       0      0 :::8032                 :::*                    LISTEN      1000       187758      24167/java      
tcp6       0      0 :::8033                 :::*                    LISTEN      1000       190439      24167/java      
tcp6       0      0 :::45700                :::*                    LISTEN      1000       190650      24293/java      
tcp6       0      0 :::8040                 :::*                    LISTEN      1000       190658      24293/java      
tcp6       0      0 :::8042                 :::*                    LISTEN      1000       190663      24293/java      

The directory /app/hadoop/tmp/dfs contains only one folder, namesecondary, which is empty. The solutions I have seen here suggest copying the namespaceID from /app/hadoop/tmp/dfs/name/current/VERSION into /app/hadoop/tmp/dfs/data/current/VERSION, but those directories were never created when the namenode was formatted.
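
The namespaceID comparison described above can be sketched as follows. This is a demo on mock VERSION files with made-up IDs, since on this machine the real files under /app/hadoop/tmp/dfs/name/current/ and /app/hadoop/tmp/dfs/data/current/ were never created:

```shell
# Demo of the namespaceID check on mock VERSION files; the real files live at
# /app/hadoop/tmp/dfs/name/current/VERSION and /app/hadoop/tmp/dfs/data/current/VERSION.
# A datanode refuses to join a namenode whose namespaceID differs from its own.
D="$(mktemp -d)"
mkdir -p "$D/name/current" "$D/data/current"
echo "namespaceID=123456789" > "$D/name/current/VERSION"   # made-up ID
echo "namespaceID=987654321" > "$D/data/current/VERSION"   # made-up ID
name_id=$(cut -d= -f2 "$D/name/current/VERSION")
data_id=$(cut -d= -f2 "$D/data/current/VERSION")
if [ "$name_id" = "$data_id" ]; then
  echo "namespaceIDs match"
else
  echo "namespaceID mismatch: copy $name_id into the data VERSION file"
fi
```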

I have lost HDFS and all the data on it, because the daemons were running during the namenode format.

1 Answer:

Answer 0 (score: 0)

  1. Edit hdfs-site.xml as follows:

    <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.
      </description>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/app/hadoop/tmp/dfs/name</value>
      <final>true</final>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/app/hadoop/tmp/dfs/data</value>
      <final>true</final>
    </property>
    
  2. sudo rm -rf /app/hadoop/tmp

  3. sudo mkdir /app/hadoop/tmp && sudo chown username:hadoopgroup /app/hadoop/tmp

  4. stop-all.sh

  5. hadoop namenode -format

  6. jps

  7. Found the solution at https://groups.google.com/forum/#!topic/chennaihug/g_uJVIjykXk, with some changes.
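
Steps 2 and 3 above can be sketched as a script. The /app/hadoop/tmp path and the username:hadoopgroup owner are placeholders from the answer; a mktemp scratch directory stands in for the real path here so the sequence can be dry-run without root:

```shell
# Recreate the Hadoop tmp dir (steps 2-3). A scratch path stands in for
# /app/hadoop/tmp so this can be dry-run without sudo; on the real node,
# substitute the actual path and run rm/mkdir/chown as root.
HADOOP_TMP="$(mktemp -d)/hadoop-tmp"   # stand-in for /app/hadoop/tmp
rm -rf "$HADOOP_TMP"                   # step 2: sudo rm -rf /app/hadoop/tmp
mkdir -p "$HADOOP_TMP"                 # step 3: sudo mkdir /app/hadoop/tmp
# step 3 cont.: sudo chown username:hadoopgroup /app/hadoop/tmp  (needs root; skipped here)
ls -ld "$HADOOP_TMP"
```

Note that the jps check in step 6 will only show the NameNode and DataNode processes after the daemons are started again (for example with start-dfs.sh or start-all.sh) following the reformat in step 5.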