Is there a way to get the Spark tracking URL other than mining log files for the log output?

Time: 2017-04-22 10:50:01

Tags: hadoop apache-spark

I have a Scala application that creates a Spark session, and I have set up a health check that uses the Spark REST API. The Spark application itself runs on Hadoop YARN. The REST API URL is currently retrieved by reading the Spark logging generated when the Spark session is created. This works most of the time, but there are some edge cases in my application where it doesn't work so well. Does anyone know of another way to get this tracking URL?

2 Answers:

Answer 0 (score: 0):

"You can do this by reading the yarn.resourcemanager.webapp.address value from the YARN configuration and the application ID (which is exposed both in events sent on the listener bus and in an existing SparkContext method)."

The paragraph above is copied from the developer's reply at https://issues.apache.org/jira/browse/SPARK-20458
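
For illustration, here is a minimal Scala sketch of what that suggestion looks like, assuming an active SparkSession named spark and a plain-HTTP cluster (the ResourceManager serves each application's UI under the standard /proxy/[appId] path):

val sc = spark.sparkContext

// ResourceManager web UI address from the YARN configuration
// (for HTTPS-only clusters, read yarn.resourcemanager.webapp.https.address instead)
val rmWebapp = sc.hadoopConfiguration.get("yarn.resourcemanager.webapp.address")

// The RM proxies every running application's UI under /proxy/[appId]
val trackingUrl = s"http://$rmWebapp/proxy/${sc.applicationId}/"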

Update:

I did try this solution and got very close. Here is some Scala/Spark code that builds that URL:

@transient val ssc: StreamingContext = StreamingContext.getActiveOrCreate(rabbitSettings.checkpointPath, CreateStreamingContext)

// Update yarn logs URL in Elasticsearch
YarnLogsTracker.update(
  ssc.sparkContext.uiWebUrl,
  ssc.sparkContext.applicationId,
  "test2")

The YarnLogsTracker object looks something like this:

object YarnLogsTracker {

  private def recoverURL(u: Option[String]): String = u match {
    case Some(a) => a.split(":").take(2).mkString(":")
    case None => ""
  }

  def update(rawUrl: Option[String], rawAppId: String, tenant: String): Unit = {
    val logUrl = s"${recoverURL(rawUrl)}:8042/node/containerlogs/container${rawAppId.substring(11)}_01_000002/$tenant/stdout/?start=-4096"
    ...

This produces something like: http://10.99.25.146:8042/node/containerlogs/container_1516203096033_91164_01_000002/test2/stdout/?start=-4096

Answer 1 (score: 0):

I've discovered a "mostly reasonable" way to accomplish this. Obviously, the best solution would be for the Spark libraries to expose the ApplicationReport they are already fetching directly to the launcher application, since they already go to the trouble of setting up delegation tokens, etc. However, that seems unlikely to happen.

This approach is two-pronged. First, it tries to build a YarnClient itself and use it to obtain the ApplicationReport, which carries the authoritative tracking URL. In my experience, however, this can fail (for example: if the job runs in CLUSTER mode with --proxy-user in a Kerberized environment, it will not be able to authenticate to YARN properly). If that happens, it falls back to "guessing" the URL from YARN configuration properties.

In my case, I call this helper method from the driver itself and report the result back to the launcher application on the side. In principle, though, it should work anywhere the Hadoop Configuration is available (possibly including the launcher application). Obviously, you can use either "prong" of this implementation (or both), depending on your needs and your tolerance for complexity and extra processing.

import java.io.IOException;

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.util.ConverterUtils;

// LOG is assumed to be an org.slf4j.Logger field on the enclosing class.

  /**
   * Given a Hadoop {@link org.apache.hadoop.conf.Configuration} and appId, use the YARN API (via an
   * {@link YarnClient} instance) to get the application report, which includes the trackingUrl.  If this fails,
   * then as a fallback, it attempts to "guess" the URL by looking at various YARN configuration properties,
   * and assumes that the URL will be something like: <pre>[yarnWebUI:port]/proxy/[appId]</pre>.
   *
   * @param hadoopConf the Hadoop {@link org.apache.hadoop.conf.Configuration}
   * @param appId the YARN application ID
   * @return the app trackingUrl, either retrieved using the {@link YarnClient}, or manually constructed using
   *         the fallback approach
   */
  public static String getYarnApplicationTrackingUrl(org.apache.hadoop.conf.Configuration hadoopConf, String appId) {
    LOG.debug("Attempting to look up YARN url for applicationId {}", appId);
    YarnClient yarnClient = null;
    try {
      // do not attempt to fail over on authentication error (ex: running with proxy-user and Kerberos)
      hadoopConf.set("yarn.client.failover-max-attempts", "0");
      yarnClient = YarnClient.createYarnClient();
      yarnClient.init(hadoopConf);
      yarnClient.start();

      final ApplicationReport report = yarnClient.getApplicationReport(ConverterUtils.toApplicationId(appId));
      return report.getTrackingUrl();
    } catch (YarnException | IOException e) {
      LOG.warn(
          "{} attempting to get report for YARN appId {}; attempting to use manually constructed fallback",
          e.getClass().getSimpleName(),
          appId,
          e
      );

      String baseYarnWebappUrl;
      String protocol;
      if ("HTTPS_ONLY".equals(hadoopConf.get("yarn.http.policy"))) {
        // YARN is configured to use HTTPS only, hence return the https address
        baseYarnWebappUrl = hadoopConf.get("yarn.resourcemanager.webapp.https.address");
        protocol = "https";
      } else {
        baseYarnWebappUrl = hadoopConf.get("yarn.resourcemanager.webapp.address");
        protocol = "http";
      }

      return String.format("%s://%s/proxy/%s", protocol, baseYarnWebappUrl, appId);
    } finally {
      if (yarnClient != null) {
        yarnClient.stop();
      }
    }
  }
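
For completeness, a call from the Spark driver might look like the following sketch (the enclosing class name SparkYarnUtils is an assumption; the answer does not name it):

// Hypothetical usage from inside the Spark driver; SparkYarnUtils is an
// assumed name for the Java class containing the helper method above.
val trackingUrl = SparkYarnUtils.getYarnApplicationTrackingUrl(
  spark.sparkContext.hadoopConfiguration,
  spark.sparkContext.applicationId)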