Cross-validation and grid search in scikit-learn

Date: 2017-07-24 09:36:10

Tags: python scikit-learn cross-validation grid-search

I am using sklearn.model_selection.GridSearchCV and sklearn.model_selection.cross_val_score, and while doing so I ran into unexpected results.

In my example I use the following imports:

from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
from sklearn.metrics import recall_score
from sklearn.model_selection import GridSearchCV
import numpy as np

First, I create a random dataset:

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

Next, I define the pipeline "generator":

def my_pipeline(C=None):
    if C is None:
        return Pipeline(
            [
                ('step1', StandardScaler()),
                ('clf', LinearSVC(random_state=42))
            ])
    else:
        return Pipeline(
            [
                ('step1', StandardScaler()),
                ('clf', LinearSVC(C=C, random_state=42))
            ])        
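(Side note: since LinearSVC's default C is 1.0, the generator could probably be collapsed into a single branch; the sketch below is just an illustrative equivalent, and the name is hypothetical.)

def my_pipeline_compact(C=1.0):
    # Equivalent sketch: LinearSVC defaults to C=1.0, so one branch suffices.
    return Pipeline([
        ('step1', StandardScaler()),
        ('clf', LinearSVC(C=C, random_state=42))
    ])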

Next, I set up several values of C to test:

Cs = [0.01, 0.1, 1, 2, 5, 10, 50, 100]

Finally, I want to check the maximum recall_score that can be obtained, once using cross_val_score and once directly with GridSearchCV:

np.max([
    np.mean(cross_val_score(my_pipeline(C=c), X, y,
                            cv=3,
                            scoring=make_scorer(recall_score)))
    for c in Cs
])

GridSearchCV(
    my_pipeline(),
    {
        'clf__C': Cs
    },
    scoring=make_scorer(recall_score),
    cv=3
).fit(X, y).best_score_

In my example, the former yields 0.85997883750571147 while the latter yields 0.85999999999999999. I expected the values to be the same. What am I missing?

I put it all together in a gist.

Edit: fixing cv. I replaced cv=3 with StratifiedKFold(n_splits=3, random_state=42) and the results did not change. In fact, it seems that cv does not affect the result at all.
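Concretely, the edited comparison looked roughly like this (a sketch only; note that random_state has no effect without shuffle=True, and newer scikit-learn versions reject that combination, but it is kept here to mirror the edit):

from sklearn.model_selection import StratifiedKFold

# Same splitter for both paths, as described in the edit above.
# (random_state is inert without shuffle=True; recent scikit-learn versions
#  raise an error for this combination.)
skf = StratifiedKFold(n_splits=3, random_state=42)

max_cv_score = np.max([
    np.mean(cross_val_score(my_pipeline(C=c), X, y,
                            cv=skf,
                            scoring=make_scorer(recall_score)))
    for c in Cs
])

best_grid_score = GridSearchCV(
    my_pipeline(),
    {'clf__C': Cs},
    scoring=make_scorer(recall_score),
    cv=skf
).fit(X, y).best_score_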

1 Answer:

Answer 0 (score: 1):

To me it looks like a precision issue. If you look at the full list of scores, for cross_val_score you get the following:

[0.85193468484717316,
 0.85394271697568724,
 0.85995478921674717,
 0.85995478921674717,
 0.8579467570882332,
 0.86195079720077905,
 0.81404660558401265,
 0.82201861337565829]

while for GridSearchCV you get the following:

[mean: 0.85200, std: 0.02736, params: {'clf__C': 0.01},
 mean: 0.85400, std: 0.02249, params: {'clf__C': 0.1},
 mean: 0.86000, std: 0.01759, params: {'clf__C': 1},
 mean: 0.86000, std: 0.01759, params: {'clf__C': 2},
 mean: 0.85800, std: 0.02020, params: {'clf__C': 5},
 mean: 0.86200, std: 0.02275, params: {'clf__C': 10},
 mean: 0.81400, std: 0.01916, params: {'clf__C': 50},
 mean: 0.82200, std: 0.02296, params: {'clf__C': 100}]

So each pair of corresponding scores is almost identical, up to a small precision difference (it seems the scores displayed by GridSearchCV are rounded).
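If you want to compare the two paths at full precision rather than via the rounded display, a quick sketch (assuming scikit-learn 0.18+, where cv_results_ is available, and reusing the objects defined in the question):

# Compare the unrounded per-C means from both paths side by side.
gs = GridSearchCV(
    my_pipeline(),
    {'clf__C': Cs},
    scoring=make_scorer(recall_score),
    cv=3
).fit(X, y)

cvs_means = np.array([
    np.mean(cross_val_score(my_pipeline(C=c), X, y,
                            cv=3,
                            scoring=make_scorer(recall_score)))
    for c in Cs
])

for c, gs_mean, cvs_mean in zip(Cs, gs.cv_results_['mean_test_score'], cvs_means):
    print(c, gs_mean, cvs_mean, gs_mean - cvs_mean)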