Hadoop job throws java.io.IOException: Attempted read from closed stream

Date: 2013-01-07 20:35:53

Tags: hadoop mapreduce

I am running a simple map-reduce job. The job uses 250 files from the Common Crawl data.

e.g. s3://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/

If I use 50 or 100 files, everything works fine. But with 250 files I get this error:

java.io.IOException: Attempted read from closed stream.
    at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:159)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:107)
    at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:76)
    at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:136)
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:111)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readByte(DataInputStream.java:248)
    at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
    at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
    at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1707)
    at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1773)
    at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:1849)
    at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$SubMapRecordReader.nextKeyValue(MultithreadedMapper.java:180)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
    at org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$MapRunner.run(MultithreadedMapper.java:268)
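
For context, the driver is set up roughly like the sketch below. This is a simplified reconstruction consistent with the stack trace (SequenceFileInputFormat feeding a MultithreadedMapper); the class names, mapper body, key/value types, and output path are placeholders, not the actual code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CommonCrawlJob {

        // Placeholder mapper; the real one parses the Common Crawl records.
        public static class ParseMapper extends Mapper<Text, Text, Text, Text> {
            @Override
            protected void map(Text key, Text value, Context context)
                    throws java.io.IOException, InterruptedException {
                context.write(key, value); // identity map, for illustration only
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "common-crawl-job");
            job.setJarByClass(CommonCrawlJob.class);

            // SequenceFile input, matching SequenceFileRecordReader in the trace
            job.setInputFormatClass(SequenceFileInputFormat.class);

            // MultithreadedMapper wraps the real mapper, as the trace shows
            job.setMapperClass(MultithreadedMapper.class);
            MultithreadedMapper.setMapperClass(job, ParseMapper.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);

            // 250 segment paths like this one are added in the failing run
            FileInputFormat.addInputPath(job, new Path(
                    "s3://aws-publicdatasets/common-crawl/parse-output/segment/1341690169105/"));
            FileOutputFormat.setOutputPath(job, new Path(args[0]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }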

Any clues?

1 Answer:

Answer 0 (score: 0)

How many map slots do you have available to process the input? Is it close to 100?

This is a guess, but the connection to S3 may be timing out while you process the first batch of files, so that by the time slots free up to handle the remaining files, the connection is no longer open. I believe timeout errors from NativeS3FileSystem show up as IOExceptions.
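
If that is the cause, one thing to try (a sketch only; the thread count shown is an illustrative guess, not a recommendation) is to lower the MultithreadedMapper thread count that appears in your stack trace, so fewer S3 input streams are held open and idle at once:

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

    public class ThrottleMapThreads {
        public static void configure(Job job) {
            // MultithreadedMapper defaults to 10 threads per map task; lowering
            // this keeps fewer NativeS3FsInputStream instances open while the
            // earlier files are still being read.
            MultithreadedMapper.setNumberOfThreads(job, 2); // 2 is illustrative
        }
    }

The jets3t client underneath NativeS3FileSystem can also be tuned through a jets3t.properties file on the classpath (for example httpclient.socket-timeout-ms and httpclient.retry-max), if the timeouts themselves turn out to be the problem.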
