How to extract feature importances from an Sklearn pipeline

Asked: 2016-08-05 10:59:25

Tags: scikit-learn random-forest

I have built a pipeline in Scikit-Learn with two steps: one that builds the features, and a second that is a RandomForestClassifier.

While I can save the pipeline, look at the individual steps, and inspect the parameters set in each, I would like to be able to examine the feature importances from the resulting model.

Is that possible?

2 Answers:

Answer 0 (score: 9)

Ah, yes.

Listing the pipeline's steps identifies which step holds the estimator you want to inspect.

For example:

pipeline.steps[1]

returns:

('predictor',
 RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
             max_depth=None, max_features='auto', max_leaf_nodes=None,
             min_samples_leaf=1, min_samples_split=2,
             min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=2,
             oob_score=False, random_state=None, verbose=0,
             warm_start=False))

You can then access the model step, and its fitted attributes, directly:

pipeline.steps[1][1].feature_importances_
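
For a concrete, hedged sketch of that round trip (synthetic data and a StandardScaler stand in for the actual feature-building step, which is not shown in the question):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for whatever the real feature-building step consumes.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

pipeline = Pipeline([
    ("features", StandardScaler()),  # placeholder for the feature-building step
    ("predictor", RandomForestClassifier(n_estimators=50, n_jobs=2)),
])
pipeline.fit(X, y)

# The fitted forest is the second element of the second (name, estimator) tuple ...
print(pipeline.steps[1][1].feature_importances_)
# ... or, equivalently, via named_steps:
print(pipeline.named_steps["predictor"].feature_importances_)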

Answer 1 (score: 0)

I wrote an article on how to do this, which you can find here.

Generally, for pipelines you can access the named_steps attribute. This gives you every transformer in the pipeline by name. So take, for example, the following pipeline:

model = Pipeline([
    ("vectorizer", CountVectorizer()),
    ("transformer", TfidfTransformer()),
    ("classifier", classifier),
])
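
As a minimal sketch (a toy corpus is assumed, and classifier stands in for whatever estimator you actually use, e.g. LogisticRegression), fitting this pipeline and reading the feature names back might look like:

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = ["the best movie", "the worst movie", "an awful film"]  # toy corpus
labels = [1, 0, 0]
classifier = LogisticRegression()  # assumed stand-in classifier

model = Pipeline([
    ("vectorizer", CountVectorizer()),
    ("transformer", TfidfTransformer()),
    ("classifier", classifier),
])
model.fit(docs, labels)

# Feature names live on the CountVectorizer step (use get_feature_names_out()
# on scikit-learn >= 1.2, where get_feature_names was removed).
print(model.named_steps["vectorizer"].get_feature_names())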

We can access the individual feature names by calling model.named_steps["vectorizer"].get_feature_names(); the list comes from the CountVectorizer step, since the TfidfTransformer only reweights the counts and has no feature names of its own. That's all well and good, but it doesn't really cover many use cases, because we usually want to combine several sources of features. Take this model for example:

model = Pipeline([
    ("union", FeatureUnion(transformer_list=[
        ("h1", TfidfVectorizer(vocabulary={"worst": 0})),
        ("h2", TfidfVectorizer(vocabulary={"best": 0})),
        ("h3", TfidfVectorizer(vocabulary={"awful": 0})),
        ("tfidf_cls", Pipeline([
            ("vectorizer", CountVectorizer()),
            ("transformer", TfidfTransformer()),
        ])),
    ])),
    ("classifier", classifier),
])

Here we combine several features using a feature union and a sub-pipeline. To access those features we have to explicitly walk through each named step in order. For example, getting the TF-IDF feature names means reaching into the inner pipeline's vectorizer:

model.named_steps["union"].transformer_list[3][1].named_steps["vectorizer"].get_feature_names()

This is a bit of a headache, but it is doable. Usually what I do is use a variation of the following snippet to get the names. The code below simply treats the collection of pipelines and feature unions as a tree and collects the feature_names while performing a DFS.

from typing import List

from sklearn.pipeline import FeatureUnion, Pipeline

def get_feature_names(model, names: List[str], name: str) -> List[str]:
    """Thie method extracts the feature names in order from a Sklearn Pipeline
    
    This method only works with composed Pipelines and FeatureUnions.  It will
    pull out all names using DFS from a model.

    Args:
        model: The model we are interested in
        names: The list of names of final featurization steps
        name: The current name of the step we want to evaluate.

    Returns:
        feature_names: The list of feature names extracted from the pipeline.
    """
    
    # Check if the name is one of our feature steps.  This is the base case.
    if name in names:
        # If it has the named_steps attribute it's a pipeline and we need to access the features
        if hasattr(model, "named_steps"):
            return extract_feature_names(model.named_steps[name], name)
        # Otherwise get the feature directly
        else:
            return extract_feature_names(model, name)
    elif type(model) is Pipeline:
        feature_names = []
        for name in model.named_steps.keys():
            feature_names += get_feature_names(model.named_steps[name], names, name)
        return feature_names
    elif type(model) is FeatureUnion:
        feature_names = []
        for name, new_model in model.transformer_list:
            feature_names += get_feature_names(new_model, names, name)
        return feature_names
    # If it is none of the above do not add it.
    else:
        return []

You will also need the method below. It operates on individual transforms (a TfidfVectorizer, for instance) to get their names. There is no universal get_feature_names in Scikit-Learn, so you have to kludge it a bit for each different case. This is my reasonable attempt at covering most use cases.

def extract_feature_names(model, name) -> List[str]:
    """Extracts the feature names from arbitrary sklearn models

    Args:
        model: The Sklearn model, transformer, clustering algorithm, etc. which we
            want to get named features for.
        name: The name of the current step in the pipeline we are at.

    Returns:
        The list of feature names.  If the model does not have named features it
        constructs feature names by appending an index to the provided name.
    """
    if hasattr(model, "get_feature_names"):
        return model.get_feature_names()
    elif hasattr(model, "n_clusters"):
        return [f"{name}_{x}" for x in range(model.n_clusters)]
    elif hasattr(model, "n_components"):
        return [f"{name}_{x}" for x in range(model.n_components)]
    elif hasattr(model, "components_"):
        n_components = model.components_.shape[0]
        return [f"{name}_{x}" for x in range(n_components)]
    elif hasattr(model, "classes_"):
        return model.classes_
    else:
        return [name]
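
Putting the two helpers to work on the FeatureUnion model from above, a hedged usage sketch (toy documents and a LogisticRegression assumed as the classifier; swap get_feature_names for get_feature_names_out on scikit-learn >= 1.2) might look like this:

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

docs = ["the best movie ever", "the worst movie ever", "an awful film"]  # toy corpus
labels = [1, 0, 0]
classifier = LogisticRegression()  # assumed stand-in classifier

model = Pipeline([
    ("union", FeatureUnion(transformer_list=[
        ("h1", TfidfVectorizer(vocabulary={"worst": 0})),
        ("h2", TfidfVectorizer(vocabulary={"best": 0})),
        ("h3", TfidfVectorizer(vocabulary={"awful": 0})),
        ("tfidf_cls", Pipeline([
            ("vectorizer", CountVectorizer()),
            ("transformer", TfidfTransformer()),
        ])),
    ])),
    ("classifier", classifier),
])
model.fit(docs, labels)

# The names of the steps that actually produce features; the DFS stops at these.
names = ["h1", "h2", "h3", "vectorizer"]
feature_names = get_feature_names(model, names, "")
print(feature_names)

# Pair the names with the fitted classifier's coefficients (or feature_importances_
# for a tree-based model) to see which features matter.
print(dict(zip(feature_names, model.named_steps["classifier"].coef_[0])))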