Getting the values of the spark-submit parameters: num-executors, executor-cores and executor-memory

Asked: 2019-01-07 20:40:41

Tags: apache-spark

I have read many questions on SO about this topic, and I wrote a modest bash script to compute these values quickly.

The main sources I used to create the script are:

The script contains the following:

# fixed values
CORES_PER_EXECUTOR=5    #  (for good HDFS throughput) --executor-cores
HADOOP_DAEMONS_CORE=1
HADOOP_DAEMONS_RAM=1
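# Each node reserves 1 core and 1 GB of RAM for the OS and the
# Hadoop daemons (NodeManager, DataNode, etc.)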


# Example values
# Per-node hardware info can be obtained with `lscpu` and `free -h`
total_nodes_in_cluster=10    # number of worker nodes in the cluster
total_cores_per_node=16      # `Core(s) per socket:` * `Socket(s):` from lscpu
total_ram_per_node=64        # total memory in GB, from `free -h`
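# (Hypothetical automation, my assumption: when this script runs on a worker
#  node, the per-node values above could be detected instead of hard-coded.)
# cores_per_socket=$(lscpu | awk '/^Core\(s\) per socket:/ {print $4}')
# sockets=$(lscpu | awk '/^Socket\(s\):/ {print $2}')
# total_cores_per_node=$((cores_per_socket * sockets))
# total_ram_per_node=$(free -g | awk '/^Mem:/ {print $2}')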

available_cores_per_node=$((total_cores_per_node - HADOOP_DAEMONS_CORE))

available_cores_in_cluster=$((available_cores_per_node * total_nodes_in_cluster))

available_executors=$((available_cores_in_cluster / CORES_PER_EXECUTOR))

num_of_executors=$((available_executors - 1)) # leave 1 executor for the YARN ApplicationMaster

num_of_executors_per_node=$((available_executors / total_nodes_in_cluster))

mem_per_executor=$(( (total_ram_per_node - HADOOP_DAEMONS_RAM) / num_of_executors_per_node ))  # GB per executor, after reserving daemon RAM

# Subtract the off-heap memory overhead, i.e. 7% of `mem_per_executor` GB
# TODO: the source answer says "Counting off heap overhead = 7% of 21GB = 3GB",
#       but 7% of 21 GB is ~1.5 GB, not 3 GB -- see my first question below
seven_percent=$((mem_per_executor * 7 / 100))
executor_memory=$((mem_per_executor - seven_percent))
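# Worked example with the values above:
#   (16 - 1) usable cores/node * 10 nodes = 150 cores
#   150 cores / 5 cores per executor      = 30 executors (29 after the AM)
#   30 executors / 10 nodes               = 3 executors per node
#   (64 - 1) GB / 3 executors             = 21 GB per executor
#   21 GB - 7% overhead (1 GB, integer)   = 20 GB for --executor-memory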

echo -e "The command will contains:\n spark-submit --class <CLASS_NAME> --num-executors ${num_of_executors} --executor-cores ${CORES_PER_EXECUTOR} --executor-memory ${executor_memory}G ...."

I would like to know:

  • Can someone help me understand the "Counting off heap overhead = 7% of `mem_per_executor` GB" part? I mean, I did the math, but I don't understand the idea behind it.
  • If the script is correct and the values were obtained from a real cluster, why does the Spark job seem "stuck", when the same job ran fine without any of these parameters?
  • Any ideas of things to add to this script to compute more parameters to pass to spark-submit? (Something like the rough sketch after this list, perhaps.)
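
For the last point, a rough, hypothetical sketch of what I mean (the heuristics and names here are my own assumptions, not from any official source):

# Hypothetical extension -- assumed heuristics, not part of the script above
# Spark's tuning guide suggests 2-3 tasks per CPU core as a starting point
default_parallelism=$((num_of_executors * CORES_PER_EXECUTOR * 2))
# sizing the driver like a single executor is my own assumed default
driver_memory=${executor_memory}
echo " --driver-memory ${driver_memory}G --conf spark.default.parallelism=${default_parallelism}"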

Thanks!

0 Answers:

No answers yet.