Spark, ML, Tuning, CrossValidator: Accessing the metrics

Date: 2016-01-08 13:59:17

Tags: apache-spark apache-spark-mllib apache-spark-ml

To build a NaiveBayes multiclass classifier, I am using a CrossValidator to select the best parameters in my pipeline:

val cv = new CrossValidator()
        .setEstimator(pipeline)
        .setEstimatorParamMaps(paramGrid)
        .setEvaluator(new MulticlassClassificationEvaluator)
        .setNumFolds(10)

val cvModel = cv.fit(trainingSet)

The pipeline contains the usual transformers and estimators, in the following order: Tokenizer, StopWordsRemover, HashingTF, IDF, and finally NaiveBayes.
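For reference, a minimal sketch of how such a pipeline might be assembled (the column names and stage variables below are illustrative assumptions, not the exact code used):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.NaiveBayes
import org.apache.spark.ml.feature.{HashingTF, IDF, StopWordsRemover, Tokenizer}

// Illustrative stages; the input/output column names are assumptions
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val remover   = new StopWordsRemover().setInputCol("words").setOutputCol("filtered")
val hashingTF = new HashingTF().setInputCol("filtered").setOutputCol("rawFeatures")
val idf       = new IDF().setInputCol("rawFeatures").setOutputCol("features")
val nb        = new NaiveBayes() // uses the default "features" and "label" columns

val pipeline = new Pipeline()
  .setStages(Array(tokenizer, remover, hashingTF, idf, nb))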

Is it possible to access the metrics that were computed for the best model?

Ideally, I would like to access the metrics of all the models, to understand how changing the parameters changes the quality of the classification. For now, though, the best model is decent enough.

FYI, I am using Spark 1.6.0.

2 Answers:

Answer 0 (score: 9):

This is how I do it:

val pipeline = new Pipeline()
  .setStages(Array(tokenizer, stopWordsFilter, tf, idf, word2Vec, featureVectorAssembler, categoryIndexerModel, classifier, categoryReverseIndexer))

...

val paramGrid = new ParamGridBuilder()
  .addGrid(tf.numFeatures, Array(10, 100))
  .addGrid(idf.minDocFreq, Array(1, 10))
  .addGrid(word2Vec.vectorSize, Array(200, 300))
  .addGrid(classifier.maxDepth, Array(3, 5))
  .build()

paramGrid.size // 16 entries

...

// Average cross-validation metric for each ParamGrid entry
val avgMetricsParamGrid = crossValidatorModel.avgMetrics

// Combine with paramGrid to see how they affect the overall metrics
val combined = paramGrid.zip(avgMetricsParamGrid)

...

val bestModel = crossValidatorModel.bestModel.asInstanceOf[PipelineModel]

// Explain params for each stage
val bestHashingTFNumFeatures = bestModel.stages(2).asInstanceOf[HashingTF].explainParams
val bestIDFMinDocFrequency = bestModel.stages(3).asInstanceOf[IDFModel].explainParams
val bestWord2VecVectorSize = bestModel.stages(4).asInstanceOf[Word2VecModel].explainParams
val bestDecisionTreeDepth = bestModel.stages(7).asInstanceOf[DecisionTreeClassificationModel].explainParams
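To see how each parameter combination performed, the zipped pairs built above can simply be printed; a small usage sketch reusing the combined value from the snippet:

// Print every parameter combination next to its average cross-validation metric
combined.foreach { case (paramMap, avgMetric) =>
  println(s"$paramMap -> average metric: $avgMetric")
}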

Answer 1 (score: 1):

 cvModel.avgMetrics

Works with pyspark 2.2.0.
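For the Scala API, the same information can be read directly off the fitted CrossValidatorModel; a small sketch, assuming the evaluator's metric is larger-is-better (as with the default F1 of MulticlassClassificationEvaluator):

// The fitted model keeps both the parameter grid and the average metrics,
// so they can be zipped without holding a separate reference to paramGrid
val best = cvModel.getEstimatorParamMaps
  .zip(cvModel.avgMetrics)
  .maxBy(_._2) // parameter combination with the highest average metric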