SocketTimeout when running a MapReduce job

Date: 2015-02-16 07:19:15

Tags: hadoop mapreduce hbase hdfs

We are getting some warnings in our MapReduce jobs while reading data from and writing data to the datanodes, but they do not abort the job. The errors appear at several points in the job. It looks like a problem with the timeout settings in the hdfs-site.xml and hbase-site.xml files.

Which timeout values should I change in these property files, and why?
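For reference, these are the properties I believe control the two timeouts in the logs below, shown with what I understand to be the Hadoop 1.x defaults (property names and values are my assumption, not confirmed): the 69000 ms read timeout looks like dfs.socket.timeout plus a per-datanode extension for a three-node pipeline, and 480000 ms matches the default dfs.datanode.socket.write.timeout.

```xml
<!-- hdfs-site.xml: timeout properties I think are involved.
     Values are the Hadoop 1.x defaults as I understand them. -->
<property>
  <!-- Client/datanode read timeout in ms; the 69000 ms in the read error
       appears to be this 60000 plus an extension per datanode in the pipeline. -->
  <name>dfs.socket.timeout</name>
  <value>60000</value>
</property>
<property>
  <!-- Datanode write timeout in ms; matches the 480000 ms in the write error. -->
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
</property>
```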

Below is an excerpt from our log files. Any help would be greatly appreciated.

Read error:

filename: trace_log-2015_02_13.gz
extracted name: extractedLogtrace_log-2015_02_13.log
15/02/13 12:51:39 INFO input.FileInputFormat: Total input paths to process : 1
15/02/13 12:51:39 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/02/13 12:51:39 WARN snappy.LoadSnappy: Snappy native library not loaded
15/02/13 12:51:39 INFO mapred.JobClient: Running job: job_201410072206_7921
15/02/13 12:51:40 INFO mapred.JobClient:  map 0% reduce 0%
15/02/13 12:51:42 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/02/13 12:52:00 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-2121649173137352050_631454 java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/D1:2011 remote=/D1:2010]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.DataInputStream.readFully(Unknown Source)
at java.io.DataInputStream.readLong(Unknown Source)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3161)
15/02/13 12:52:00 WARN hdfs.DFSClient: Error Recovery for blk_-2121649173137352050_631454 bad datanode[0] D1:2010
15/02/13 12:52:00 WARN hdfs.DFSClient: Error Recovery for block blk_-2121649173137352050_631454 in pipeline D1:2010, D2:2010, D0:2010: bad datanode D1:2010

Write error:

java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/D1:2010 remote=/D1:2011]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
at java.lang.Thread.run(Thread.java:662)

0 Answers:

No answers