Why is the Hadoop file system slow?

Asked: 2012-07-10 17:57:52

Tags: hadoop hdfs


I'm fairly new to Hadoop and I'm having a problem with HDFS. Whenever I try to put a file into HDFS, the copy either hangs or runs very slowly. I think the last lines of the datanode log may help you understand what's going on:

2012-07-10 13:24:10,623 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2012-07-10 13:24:10,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.104.9.7:50010, storageID=DS-1709612965-10.104.9.7-50010-1341924367318, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/scratch/hadoop-mah/dfs/data/current'}
2012-07-10 13:24:10,627 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2012-07-10 13:24:10,639 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 3 blocks got processed in 7 msecs
2012-07-10 13:24:10,639 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2012-07-10 13:24:10,692 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_180125215289100238_1017
2012-07-10 13:25:23,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-3055723234233026824_1001 src: /10.104.9.106:57912 dest: /10.104.9.7:50010
2012-07-10 13:25:23,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.104.9.106:57912, dest: /10.104.9.7:50010, bytes: 4, op: HDFS_WRITE, cliID: DFSClient_843961964, srvID: DS-1709612965-10.104.9.7-50010-1341924367318, blockid: blk_-3055723234233026824_1001
2012-07-10 13:25:23,325 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-3055723234233026824_1001 terminating


I don't know what to do about this, and I'm hoping I can get some help here.
Thanks.
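As a first diagnostic step (a sketch of my own, not something from the logs above), it may help to rule out a slow local disk on the datanode, since an `hdfs put` can go no faster than the volume backing `dfs.data.dir`. The path and size below are illustrative only:

```shell
# Hypothetical disk-speed check on the datanode host: write 16 MB with an
# fsync at the end and print the throughput line dd reports.
TESTFILE=/tmp/hdfs_disk_test.$$
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fsync 2>&1 | tail -n 1
rm -f "$TESTFILE"

# Next steps on the live cluster (command names per the Hadoop 0.20/1.x era
# this question dates from; run these against the actual cluster):
#   hadoop dfsadmin -report                      # datanode capacity, dead nodes
#   hadoop fsck / -blocks                        # under-replicated/corrupt blocks
#   time hadoop fs -put /etc/hosts /tmp/puttest  # is the stall immediate or mid-copy?
```

If the `dd` throughput is reasonable, the next suspects are usually DNS resolution between the client and datanodes or network saturation, which `dfsadmin -report` and a timed small put can help narrow down.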

0 Answers:

No answers yet