Feature importance from a benchmark experiment with nested cross-validation

Date: 2019-12-12 14:44:21

Tags: r machine-learning mlr gini

I am using the mlr package in R to compare two learners, a random forest and a lasso classifier, on a binary classification task. I would like to extract the feature importance for the best classifier (in this case the random forest), similar to what caret::varImp() provides. I came across getBMRFeatSelResults(), getFeatureImportance() and generateFeatureImportanceData(), but none of them seems to solve the problem. Here is my code for the benchmark experiment with nested resampling. Ideally, I would like the mean decrease in Gini. Thank you.

library(easypackages)

libraries("mlr","purrr","glmnet","parallelMap","parallel")

data = read.table("data_past.txt", header = TRUE)

set.seed(123)

task = makeClassifTask(id = "past_history", data = data, target = "DIAG", positive = "BD")

ps_rf = makeParamSet(makeIntegerParam("mtry", lower = 4, upper = 16), makeDiscreteParam("ntree", values = 1000))

ps_lasso = makeParamSet(makeNumericParam("s", lower = .01, upper = 1), makeDiscreteParam("alpha", values = 1))

ctrl_rf = makeTuneControlRandom(maxit = 10L)

ctrl_lasso = makeTuneControlRandom(maxit = 100L)

inner = makeResampleDesc("RepCV", folds = 10, reps = 3, stratify = TRUE)

lrn_rf = makeLearner("classif.randomForest", predict.type = "prob", fix.factors.prediction = TRUE)

lrn_rf = makeTuneWrapper(lrn_rf, resampling = inner, par.set = ps_rf, control = ctrl_rf, measures = auc, show.info = FALSE)

lrn_lasso = makeLearner("classif.glmnet", predict.type = "prob", fix.factors.prediction = TRUE)

lrn_lasso = makeTuneWrapper(learner = lrn_lasso, resampling = inner, control = ctrl_lasso,  par.set = ps_lasso, measures = auc, show.info = FALSE)

outer = makeResampleDesc("CV", iters = 10, stratify = TRUE)

lrns = list(lrn_rf, lrn_lasso)

parallelStartMulticore(36)

res = benchmark(lrns, task, outer, measures = list(auc, ppv, npv, fpr, tpr, mmce), show.info = FALSE, models = TRUE)

saveRDS(res, file = "res.rds")

parallelStop()

models <- getBMRModels(res, drop = TRUE)

1 Answer:

Answer 0 (score: 1)

Since you are talking about CV and want to

  "extract the feature importance for the best classifier"

it is not clear what you want to do. There is no single "best model" in a CV, and feature importance usually cannot be measured across a CV either.

CV aims at estimating/comparing predictive performance, not at computing/interpreting feature importance.

Here is an answer to a similar question that might help.
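That said, if you just want to inspect importances, you can look at the random-forest model fitted in each outer fold and average across folds. A minimal sketch (untested assumption: the tuned RF learner's id in the benchmark result is "classif.randomForest.tuned", and a TuneWrapper model stores the fitted learner in $learner.model$next.model):

```r
# Sketch, not a definitive recipe: per-outer-fold importances, then the mean.
models <- getBMRModels(res, drop = TRUE)
rf_models <- models[["classif.randomForest.tuned"]]  # assumed learner id

imp_per_fold <- lapply(rf_models, function(m) {
  # unwrap the TuneWrapper to reach the underlying randomForest model;
  # the importance measure returned depends on how the learner was configured
  getFeatureImportance(m$learner.model$next.model)$res
})

# one column per outer fold, one row per feature; average across folds
imp_mat <- sapply(imp_per_fold, unlist)
mean_imp <- sort(rowMeans(imp_mat), decreasing = TRUE)
mean_imp
```

Note that this gives ten importance estimates from ten different models, not the importance of "the" final model; treat the average as descriptive only.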

  "I came across getBMRFeatSelResults(), getFeatureImportance(), generateFeatureImportanceData(), but there seems to be no way to solve the problem."

When making statements like this, it would help to explain in detail why these functions do not do what you want, rather than just stating it as a fact :)
