YarnAllocator requests more containers than I asked for

Date: 2018-06-08 08:49:02

Tags: apache-spark yarn resourcemanager

YarnAllocator and the YARN ResourceManager are behaving very generously: they request and grant more than I configured. I asked for a total of 72 containers, yet 133 container requests were submitted. My expectation was that YarnAllocator would request only the number I asked for. Can anyone explain what is going on?

Here are the requests captured from the logs:

18/06/08 06:52:29 INFO yarn.YarnAllocator: Will request 72 executor container(s), each with 4 core(s) and 11264 MB memory (including 3072 MB of overhead)
18/06/08 06:52:29 INFO yarn.YarnAllocator: Submitted 72 unlocalized container requests.
...
18/06/08 06:52:30 INFO yarn.YarnAllocator: Will request 8 executor container(s), each with 4 core(s) and 11264 MB memory (including 3072 MB of overhead)
18/06/08 06:52:30 INFO yarn.YarnAllocator: Submitted 8 unlocalized container requests.
...
18/06/08 06:52:31 INFO yarn.YarnAllocator: Will request 53 executor container(s), each with 4 core(s) and 11264 MB memory (including 3072 MB of overhead)
18/06/08 06:52:32 INFO yarn.YarnAllocator: Submitted 53 unlocalized container requests.
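The three log lines above add up to 72 + 8 + 53 = 133 submitted requests. A plausible (hedged) model of this behavior: each allocation heartbeat, the allocator recomputes how many executors are "missing" as target minus pending minus running, and if earlier requests are not yet reflected in the pending count, the shortfall is requested again. The sketch below is illustrative only; the function name and the pending counts in cycles 2 and 3 are made up to reproduce the totals in the log, not taken from Spark's source.

```python
# Hedged sketch: an allocator loop that recomputes "missing" executors
# each heartbeat can over-request when earlier requests are not yet
# counted as pending. Names and numbers here are illustrative.

def missing_executors(target, pending, running):
    # Assumed arithmetic: how many more containers to ask for this cycle.
    return max(0, target - pending - running)

target = 72  # --num-executors 72

# Cycle 1: nothing pending or running yet -> request all 72.
r1 = missing_executors(target, pending=0, running=0)    # 72

# Cycle 2: suppose only 64 requests are visible as pending so far,
# so 8 appear "missing" and are requested again.
r2 = missing_executors(target, pending=64, running=0)   # 8

# Cycle 3: suppose only 19 are visible now -> 53 more requested.
r3 = missing_executors(target, pending=19, running=0)   # 53

total = r1 + r2 + r3
print(total)  # 133, matching the log totals
```

Under this model, the over-request comes from the lag between submitting a container request and it being reflected in the allocator's pending count, not from the configured target itself.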

Here is my Spark configuration:

--driver-memory 4g \
--executor-memory 8g \
--executor-cores 4 \
--num-executors 72 \
--conf spark.yarn.executor.memoryOverhead=3072 \
--conf spark.executor.extraJavaOptions="-XX:+UseG1GC" \
--conf spark.yarn.max.executor.failures=128 \
--conf spark.memory.fraction=0.1 \
--conf spark.rdd.compress=true \
--conf spark.shuffle.compress=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.shuffle.spill.compress=true \
--conf spark.speculation=false \
--conf spark.task.maxFailures=1000 \
--conf spark.sql.codegen.wholeStage=false \
--conf spark.scheduler.listenerbus.eventqueue.size=100000 \
--conf spark.shuffle.service.enabled=false
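As a sanity check, the per-container size in the log lines is consistent with this configuration: `--executor-memory 8g` plus `spark.yarn.executor.memoryOverhead=3072` gives the 11264 MB that YarnAllocator reports. A minimal check:

```python
# Verify the container size reported in the YarnAllocator log lines.
executor_memory_mb = 8 * 1024   # --executor-memory 8g
overhead_mb = 3072              # spark.yarn.executor.memoryOverhead=3072

container_mb = executor_memory_mb + overhead_mb
print(container_mb)  # 11264, matching "11264 MB memory (including 3072 MB of overhead)"
```

So the container sizing is exactly as configured; only the container *count* exceeds the target. Note also that `spark.shuffle.service.enabled` appears twice in the submission (first `true`, then `false`); the later value wins, which may or may not be intended.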

0 Answers:

No answers yet