How do I suppress INFO messages from spark-sql running on EMR?

Asked: 2014-12-14 02:02:28

Tags: log4j apache-spark emr

I am running Spark on EMR following the instructions in Run Spark and Spark SQL on Amazon Elastic MapReduce:

This tutorial walks you through installing and running Spark, a general engine for large-scale data processing, on an Amazon EMR cluster. You will also create and query a dataset in Amazon S3 using Spark SQL, and learn how to monitor Spark on an Amazon EMR cluster using Amazon CloudWatch.

I tried to silence the INFO messages by editing $HOME/spark/conf/log4j.properties, but it had no effect.
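For reference, the edit was along these lines (a sketch, assuming the stock Spark log4j template; only the root logger's level is changed):

# $HOME/spark/conf/log4j.properties -- root logger lowered from INFO to WARN
log4j.rootCategory=WARN, console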

The output looks like this:

$ ./spark/bin/spark-sql
Spark assembly has been built with Hive, including Datanucleus jars on classpath
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/.versions/2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/.versions/spark-1.1.1.e/lib/spark-assembly-1.1.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2014-12-14 20:59:01,819 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1009)) - mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
2014-12-14 20:59:01,825 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1009)) - mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
2014-12-14 20:59:01,825 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1009)) - mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
2014-12-14 20:59:01,825 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1009)) - mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack

How can I suppress the INFO messages above?

3 answers:

Answer 0 (Score: 15)

If you know up front that you want to suppress logging on a new EMR cluster, you can also add a configuration option at cluster creation time.

EMR accepts configuration options as JSON, which you can enter directly in the AWS Console, or pass in via a file when using the CLI.

In this case, to change the log level to WARN, here is the JSON:

[
  {
    "classification": "spark-log4j",
    "properties": {"log4j.rootCategory": "WARN, console"}
  }
]
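The classification's properties map directly onto log4j.properties keys, so you can also target individual noisy loggers. As a sketch, this variant would additionally silence the Hadoop deprecation warnings shown in the question (the deprecation-logger key is an illustrative addition, not part of the original answer):

[
  {
    "classification": "spark-log4j",
    "properties": {
      "log4j.rootCategory": "WARN, console",
      "log4j.logger.org.apache.hadoop.conf.Configuration.deprecation": "ERROR"
    }
  }
]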

In the console, you can add it in the first creation step:

[Screenshot: configuration in the AWS Console]

Or, if you are creating the cluster with the CLI:

aws emr create-cluster <options here> --configurations file://config_file.json
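For example, a complete invocation might look like the following (a hypothetical sketch: the cluster name, release label, and instance settings are placeholders, and the spark-log4j classification requires an EMR 4.x or later release label):

# Hypothetical cluster; substitute your own name, release label, and sizing
aws emr create-cluster \
  --name "spark-quiet-logs" \
  --release-label emr-4.2.0 \
  --applications Name=Spark \
  --use-default-roles \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --configurations file://config_file.json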

You can read more in the EMR documentation.

Answer 1 (Score: 13)

I was able to do this by modifying $HOME/spark/conf/log4j.properties as desired and invoking spark-sql with --driver-java-options, as follows:

./spark/bin/spark-sql --driver-java-options "-Dlog4j.configuration=file:///home/hadoop/spark/conf/log4j.properties"

I was able to verify that the correct file was being used by adding -Dlog4j.debug to the options:

./spark/bin/spark-sql --driver-java-options "-Dlog4j.debug -Dlog4j.configuration=file:///home/hadoop/spark/conf/log4j.properties"
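Equivalently (a sketch, assuming your build reads $HOME/spark/conf/spark-defaults.conf via spark-submit), the setting can be persisted through spark.driver.extraJavaOptions so it need not be passed on every invocation:

# $HOME/spark/conf/spark-defaults.conf -- hypothetical persistent equivalent
spark.driver.extraJavaOptions -Dlog4j.configuration=file:///home/hadoop/spark/conf/log4j.properties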

Answer 2 (Score: 5)

spark-sql --driver-java-options "-Dlog4j.configuration=file:///home/hadoop/conf/log4j.properties"

cat conf/log4j.properties

# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=WARN
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=WARN