mrjob returned non-zero exit status 256

Asked: 2018-08-31 04:16:29

Tags: python hadoop mrjob common-crawl

I'm new to MapReduce, and I'm trying to run a MapReduce job using the mrjob Python package. However, I'm getting this error:

ERROR:mrjob.launch:Step 1 of 1 failed: Command '['/usr/bin/hadoop', 'jar', '/usr/lib/hadoop-mapreduce/hadoop-streaming.jar', '-files', 
'hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/mrjob.zip#mrjob.zip,
hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/setup-wrapper.sh#setup-wrapper.sh,
hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/word_count.py#word_count.py', '-archives', 
'hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/word_count_ccmr.tar.gz#word_count_ccmr.tar.gz', '-D', 
'mapreduce.job.maps=4', '-D', 'mapreduce.job.reduces=4', '-D', 'mapreduce.map.java.opts=-Xmx1024m', '-D', 'mapreduce.map.memory.mb=1200', '-D', 
'mapreduce.output.fileoutputformat.compress=true', '-D', 'mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec', '-D', 
'mapreduce.reduce.java.opts=-Xmx1024m', '-D', 'mapreduce.reduce.memory.mb=1200', '-input', 'hdfs:///user/hadoop/test-1.warc', '-output', 
'hdfs:///user/hadoop/gg', '-mapper', 'sh -ex setup-wrapper.sh python word_count.py --step-num=0 --mapper', '-combiner', 
'sh -ex setup-wrapper.sh python word_count.py --step-num=0 --combiner', '-reducer', 'sh -ex setup-wrapper.sh python word_count.py --step-num=0 --reducer']' 
returned non-zero exit status 256
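
A note on the 256: on POSIX this looks like a raw 16-bit wait status rather than the actual exit code. The exit code lives in the high byte, so 256 decodes to a plain exit code of 1 from Hadoop streaming, i.e. a generic "a task failed"; the real cause has to come from the task logs. Decoding it in Python:

import os

status = 256                       # raw wait status reported by mrjob
if os.WIFEXITED(status):
    print(os.WEXITSTATUS(status))  # -> 1, the actual Hadoop streaming exit code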

I tried running it locally with python ./word_count.py input/test-1.warc > output, and it succeeded.

I am using:

  1. python 2.7.14
  2. Hadoop 2.8.3-amzn-1
  3. pip 18.0
  4. mrjob 0.6.4

Any ideas? Thanks!

Here is the command I use to run the MapReduce job. I took it from the cc-mrjob repository. The file is named run_hadoop.sh, and I made it executable with chmod +x run_hadoop.sh:

#!/bin/sh

JOB="$1"
INPUT="$2"
OUTPUT="$3"

sudo chmod +x "${JOB%.py}.py"

if [ -z "$JOB" ] || [ -z "$INPUT" ] || [ -z "$OUTPUT" ]; then
    echo "Usage: $0 <job> <input> <outputdir>"
    echo "  Run a CommonCrawl mrjob on Hadoop"
    echo
    echo "Arguments:"
    echo "  <job>     CCJob implementation"
    echo "  <input>   input path"
    echo "  <output>  output path (must not exist)"
    echo
    echo "Example:"
    echo "  $0 word_count input/test-1.warc  hdfs:///.../output/"
    echo
    echo "Note: don't forget to adapt the number of maps/reduces and the memory requirements"
    exit 1
fi

# strip .py from job name
JOB=${JOB%.py}

# wrap Python files for deployment, cf. below option --setup,
# see for details
# http://pythonhosted.org/mrjob/guides/setup-cookbook.html#putting-your-source-tree-in-pythonpath
tar cvfz ${JOB}_ccmr.tar.gz *.py

# number of maps and reduces
NUM_MAPS=4
NUM_REDUCES=4

if [ -n "$S3_LOCAL_TEMP_DIR" ]; then
    S3_LOCAL_TEMP_DIR="--s3_local_temp_dir=$S3_LOCAL_TEMP_DIR"
else
    S3_LOCAL_TEMP_DIR=""
fi
python $JOB.py \
       -r hadoop \
       --jobconf "mapreduce.map.memory.mb=1200" \
       --jobconf "mapreduce.map.java.opts=-Xmx1024m" \
       --jobconf "mapreduce.reduce.memory.mb=1200" \
       --jobconf "mapreduce.reduce.java.opts=-Xmx1024m" \
       --jobconf "mapreduce.output.fileoutputformat.compress=true" \
       --jobconf "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec" \
       --jobconf "mapreduce.job.reduces=$NUM_REDUCES" \
       --jobconf "mapreduce.job.maps=$NUM_MAPS" \
       --setup 'export PYTHONPATH=$PYTHONPATH:'${JOB}'_ccmr.tar.gz#/' \
       --no-output \
       --cleanup NONE \
       $S3_LOCAL_TEMP_DIR \
       --output-dir "$OUTPUT" \
       "hdfs:///user/hadoop/$INPUT"

I run it with ./run_hadoop.sh word_count test-1.warc output

where

  • word_count is the job (a file named word_count.py; a simplified sketch of it follows this list)
  • test-1.warc is the input (located at hdfs:///user/hadoop/test-1.warc)
  • output is the output directory (hdfs:///user/hadoop/output); I also make sure to use a different output path for each job so the directory never already exists
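
For reference, a stripped-down sketch of what an mrjob job file like word_count.py looks like. This is not the actual cc-mrjob code, which parses WARC records (hence its warc and boto3 imports); it only shows the general shape that the --mapper/--combiner/--reducer arguments in the error above invoke:

from mrjob.job import MRJob

class WordCount(MRJob):
    # mrjob invokes these via "python word_count.py --step-num=0 --mapper" etc.
    def mapper(self, _, line):
        for word in line.split():
            yield word, 1

    def combiner(self, word, counts):
        yield word, sum(counts)

    def reducer(self, word, counts):
        yield word, sum(counts)

if __name__ == '__main__':
    WordCount.run()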

*Update*

I took a look at the syslog in the HUE interface. There is also this error:

org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1536113332062_0001_r_000003_0

Is this related to the error I'm getting?

I also got this in the stderr of one of the map attempts:

/bin/sh: run_prestart: line 1: syntax error: unexpected end of file

No module named boto3

However, I installed boto3 on EMR with pip install boto3. Is the module not available to Hadoop?
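
To figure out where the import fails, a throwaway diagnostic job can help. This is a hypothetical sketch (EnvCheck is just an illustrative name), not part of cc-mrjob; it reports which interpreter each task node uses and whether boto3 is importable there:

import sys
from mrjob.job import MRJob

class EnvCheck(MRJob):
    def mapper(self, _, line):
        # try the import on the task node itself, where the failure happens
        try:
            import boto3
            version = boto3.__version__
        except ImportError:
            version = 'NOT INSTALLED'
        yield '%s boto3=%s' % (sys.executable, version), 1

    def reducer(self, key, counts):
        yield key, sum(counts)

if __name__ == '__main__':
    EnvCheck.run()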

1 Answer:

Answer 0 (score: 0)

I got it working by following this blog post:

http://benjamincongdon.me/blog/2018/02/02/MapReduce-on-Python-is-better-with-MRJob-and-EMR/

Essentially, you have to include a .conf file for the mrjob Hadoop runner, e.g. mrjob.conf.

Inside that file, put this:

runners:
  hadoop:
    setup:
      - 'set -e'
      - VENV=/tmp/$mapreduce_job_id
      - if [ ! -e $VENV ]; then virtualenv $VENV; fi
      - . $VENV/bin/activate
      - 'pip install boto3'
      - 'pip install warc'
      - 'pip install https://github.com/commoncrawl/gzipstream/archive/master.zip'
    sh_bin: '/bin/bash -x'
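
As I understand mrjob's setup mechanism, these commands run on each task node before a task starts: they create a virtualenv under /tmp once per job (Hadoop exposes jobconf properties such as mapreduce.job.id to tasks as environment variables with dots replaced by underscores, hence $mapreduce_job_id), activate it, and install boto3, warc, and gzipstream into it. That is what fixes the "No module named boto3" error: pip install on the EMR master only affects the master node, while the map and reduce tasks run in a separate environment on the task nodes.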

and use it by referencing the conf file in run_hadoop.sh:

# the new line is --conf-path, pointing at our config file
python $JOB.py \
        --conf-path mrjob.conf \
        -r hadoop \
        --jobconf "mapreduce.map.memory.mb=1200" \
        --jobconf "mapreduce.map.java.opts=-Xmx1024m" \
        --jobconf "mapreduce.reduce.memory.mb=1200" \
        --jobconf "mapreduce.reduce.java.opts=-Xmx1024m" \
        --jobconf "mapreduce.output.fileoutputformat.compress=true" \
        --jobconf "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec" \
        --jobconf "mapreduce.job.reduces=$NUM_REDUCES" \
        --jobconf "mapreduce.job.maps=$NUM_MAPS" \
        --setup 'export PYTHONPATH=$PYTHONPATH:'${JOB}'_ccmr.tar.gz#/' \
        --cleanup NONE \
        $S3_LOCAL_TEMP_DIR \
        --output-dir "hdfs:///user/hadoop/$OUTPUT" \
        "hdfs:///user/hadoop/$INPUT"

Now, if you call ./run_hadoop.sh word_count input/test-1.warc output, it should work!
