Scikit-learn: How to obtain True Positives, True Negatives, False Positives and False Negatives

Date: 2015-07-09 17:19:03

Tags: python machine-learning scikit-learn classification supervised-learning

My question:

I have a dataset which is a large JSON file. I read it and store it in the trainList variable.

Next, I pre-process it so that I can work with it.

Once that is done, I start the classification:

  1. I use the kfold cross-validation method in order to obtain the mean accuracy and to train a classifier.
  2. I make the predictions and obtain the accuracy and the confusion matrix of that fold.
  3. After this, I would like to obtain the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) values. I will use these parameters to obtain the Sensitivity and the Specificity.
  4. Finally, I would use this to put in HTML in order to show a chart with the TPs of each label.

    Code:

    The variables I have for the moment:

    trainList #It is a list with all the data of my dataset in JSON form
    labelList #It is a list with all the labels of my data

    Most part of the method:

    #I transform the data from JSON form to a numerical one
    X=vec.fit_transform(trainList)
    
    #I scale the matrix (don't know why but without it, it makes an error)
    X=preprocessing.scale(X.toarray())
    
    #I generate a KFold in order to make cross validation
    kf = KFold(len(X), n_folds=10, indices=True, shuffle=True, random_state=1)
    
    #I start the cross validation
    for train_indices, test_indices in kf:
        X_train=[X[ii] for ii in train_indices]
        X_test=[X[ii] for ii in test_indices]
        y_train=[labelList[ii] for ii in train_indices]
        y_test=[labelList[ii] for ii in test_indices]
    
        #I train the classifier
        trained=qda.fit(X_train,y_train)
    
        #I make the predictions
        predicted=qda.predict(X_test)
    
        #I obtain the accuracy of this fold
        ac=accuracy_score(predicted,y_test)
    
        #I obtain the confusion matrix
        cm=confusion_matrix(y_test, predicted)
    
        #I should calculate the TP,TN, FP and FN 
        #I don't know how to continue
    


18 Answers:

Answer 0 (score: 74):

For the multi-class case, everything you need can be found from the confusion matrix. For example, if your confusion matrix looks like this:

[image: confusion matrix]

Then, for each class, what you are looking for is this:

[image: overlay of the TP/FP/FN/TN regions for one class]

Using pandas/numpy, you can do this for all classes at once:

FP = confusion_matrix.sum(axis=0) - np.diag(confusion_matrix)  
FN = confusion_matrix.sum(axis=1) - np.diag(confusion_matrix)
TP = np.diag(confusion_matrix)
TN = confusion_matrix.values.sum() - (FP + FN + TP)

# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
# Specificity or true negative rate
TNR = TN/(TN+FP) 
# Precision or positive predictive value
PPV = TP/(TP+FP)
# Negative predictive value
NPV = TN/(TN+FN)
# Fall out or false positive rate
FPR = FP/(FP+TN)
# False negative rate
FNR = FN/(TP+FN)
# False discovery rate
FDR = FP/(TP+FP)

# Overall accuracy
ACC = (TP+TN)/(TP+FP+FN+TN)
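(Note that the snippet above indexes .values, i.e. it assumes confusion_matrix is a pandas DataFrame.) Below is a minimal, self-contained sketch of the same idea using sklearn.metrics.confusion_matrix, which returns a plain numpy array; the labels are illustrative only:

import numpy as np
from sklearn.metrics import confusion_matrix

# toy multi-class data (illustrative values)
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

cm = confusion_matrix(y_true, y_pred)  # numpy array, so no .values needed

FP = cm.sum(axis=0) - np.diag(cm)  # predicted as class i, but actually another class
FN = cm.sum(axis=1) - np.diag(cm)  # actually class i, but predicted as another class
TP = np.diag(cm)                   # correctly predicted per class
TN = cm.sum() - (FP + FN + TP)     # everything else

print(TP / (TP + FN))  # per-class sensitivity/recall
print(TN / (TN + FP))  # per-class specificity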

Answer 1 (score: 20):

If you have two lists that contain the predicted and the actual values, as it appears you do, you can pass them to a function that computes TP, FP, TN, FN with something like this:

def perf_measure(y_actual, y_hat):
    TP = 0
    FP = 0
    TN = 0
    FN = 0

    for i in range(len(y_hat)):
        if y_actual[i] == y_hat[i] == 1:
            TP += 1
        if y_hat[i] == 1 and y_actual[i] != y_hat[i]:
            FP += 1
        if y_actual[i] == y_hat[i] == 0:
            TN += 1
        if y_hat[i] == 0 and y_actual[i] != y_hat[i]:
            FN += 1

    return (TP, FP, TN, FN)

From here I think you will be able to calculate the rates of interest to you, as well as other performance metrics like specificity and sensitivity.
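For example, a quick sanity check with made-up lists (illustrative values only):

y_actual = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
y_hat    = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

TP, FP, TN, FN = perf_measure(y_actual, y_hat)
print(TP, FP, TN, FN)  # 3 3 4 1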

Answer 2 (score: 17):

According to the scikit-learn documentation,

http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html#sklearn.metrics.confusion_matrix

By definition, a confusion matrix C is such that C[i, j] is equal to the number of observations known to be in group i but predicted to be in group j.

Thus in binary classification, the count of true negatives is C[0,0], false negatives is C[1,0], true positives is C[1,1] and false positives is C[0,1]:

CM = confusion_matrix(y_true, y_pred)

TN = CM[0][0]
FN = CM[1][0]
TP = CM[1][1]
FP = CM[0][1]
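If there is any chance that only one of the two classes occurs in your data, it is safer to pin the label order explicitly (the labels parameter is part of confusion_matrix's API and guarantees a 2x2 result):

CM = confusion_matrix(y_true, y_pred, labels=[0, 1])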

Answer 3 (score: 17):

You can obtain all of the parameters from the confusion matrix. In scikit-learn's convention (rows are the true labels, columns the predicted labels, with class 0 first), the structure of the 2x2 confusion matrix is as follows:

TN|FP
FN|TP

So

TN = cm[0][0]
FP = cm[0][1]
FN = cm[1][0]
TP = cm[1][1]

More details: https://en.wikipedia.org/wiki/Confusion_matrix (be careful when comparing sources: conventions differ over which axis holds the true labels, whereas scikit-learn always puts the true labels on the rows).

Answer 4 (score: 5):

In scikit-learn's 'metrics' library there is a confusion_matrix method which gives you the required output.

You can use any classifier you want. Here I used KNeighbors as an example.

from sklearn import metrics, neighbors

clf = neighbors.KNeighborsClassifier()

X_train, y_train = ...
X_test, y_test = ...

# the classifier must be fitted on the training data before predicting
clf.fit(X_train, y_train)

expected = y_test
predicted = clf.predict(X_test)

conf_matrix = metrics.confusion_matrix(expected, predicted)

>>> print(conf_matrix)
[[1403   87]
 [  56 3159]]

Documentation: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html#sklearn.metrics.confusion_matrix

Answer 5 (score: 2):

You can try sklearn.metrics.classification_report as below:

import sklearn
y_true = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0]

print(sklearn.metrics.classification_report(y_true, y_pred))

Output:

             precision    recall  f1-score   support

          0       0.80      0.57      0.67         7
          1       0.50      0.75      0.60         4

avg / total       0.69      0.64      0.64        11

Answer 6 (score: 2):

A one-liner to get the true positives etc. out of the confusion matrix is to ravel it:

from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0]
y_pred = [1, 0, 1, 0]   

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 1 1 1 1
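Note that this unpacking only works in the binary case: with N classes, ravel() returns all N*N cells of the matrix. Passing labels=[0, 1] additionally guards against the degenerate case where only one class occurs in the data.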

Answer 7 (score: 2):

I wrote a version that works using only numpy. I hope it helps you.

import numpy as np

def perf_metrics_2X2(yobs, yhat):
    """
    Returns the specificity, sensitivity, positive predictive value, and 
    negative predictive value 
    of a 2X2 table.

    where:
    0 = negative case
    1 = positive case

    Parameters
    ----------
    yobs :  array of positive and negative ``observed`` cases
    yhat : array of positive and negative ``predicted`` cases

    Returns
    -------
    sensitivity  = TP / (TP+FN)
    specificity  = TN / (TN+FP)
    pos_pred_val = TP/ (TP+FP)
    neg_pred_val = TN/ (TN+FN)

    Author: Julio Cardenas-Rodriguez
    """
    TP = np.sum(  yobs[yobs==1] == yhat[yobs==1] )
    TN = np.sum(  yobs[yobs==0] == yhat[yobs==0] )
    FP = np.sum(  yobs[yobs==0] != yhat[yobs==0] )
    FN = np.sum(  yobs[yobs==1] != yhat[yobs==1] )

    sensitivity  = TP / (TP+FN)
    specificity  = TN / (TN+FP)
    pos_pred_val = TP/ (TP+FP)
    neg_pred_val = TN/ (TN+FN)

    return sensitivity, specificity, pos_pred_val, neg_pred_val
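A quick check with made-up arrays (illustrative values; the inputs must be numpy arrays for the boolean masking to work):

yobs = np.array([1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0])
yhat = np.array([1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0])

print(perf_metrics_2X2(yobs, yhat))
# sensitivity 0.75, specificity ~0.571, PPV 0.5, NPV 0.8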

Answer 8 (score: 1):

If you have more than one class in your classifier, you might want to use pandas-ml for that part. pandas-ml's ConfusionMatrix gives more detailed information. Check it out:

[image: pandas-ml ConfusionMatrix output]
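A minimal sketch of that usage, assuming the third-party pandas_ml package (now unmaintained, so it may not install cleanly against recent pandas versions):

from pandas_ml import ConfusionMatrix

y_true = ['cat', 'ant', 'cat', 'cat', 'ant', 'bird']
y_pred = ['ant', 'ant', 'cat', 'cat', 'ant', 'cat']

cm = ConfusionMatrix(y_true, y_pred)
cm.print_stats()  # prints per-class TP/TN/FP/FN plus derived statistics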

Answer 9 (score: 1):

Just in case someone is looking for the same thing in a MULTI-CLASS example (see the usage sketch after the function):
def perf_measure(y_actual, y_pred):
    class_id = sorted(set(y_actual).union(set(y_pred)))
    TP = []
    FP = []
    TN = []
    FN = []

    for index ,_id in enumerate(class_id):
        TP.append(0)
        FP.append(0)
        TN.append(0)
        FN.append(0)
        for i in range(len(y_pred)):
            if y_actual[i] == y_pred[i] == _id:
                TP[index] += 1
            if y_pred[i] == _id and y_actual[i] != _id:
                FP[index] += 1
            if y_actual[i] != _id and y_pred[i] != _id:
                TN[index] += 1
            if y_actual[i] == _id and y_pred[i] != _id:
                FN[index] += 1


    return class_id, TP, FP, TN, FN
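For instance, with the same toy labels that the multilabel_confusion_matrix answer further down uses:

y_actual = ['cat', 'ant', 'cat', 'cat', 'ant', 'bird']
y_pred = ['ant', 'ant', 'cat', 'cat', 'ant', 'cat']

class_id, TP, FP, TN, FN = perf_measure(y_actual, y_pred)
print(class_id)          # ['ant', 'bird', 'cat']
print(TP, FP, TN, FN)    # [2, 0, 2] [1, 0, 1] [3, 5, 2] [0, 1, 1]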

Answer 10 (score: 1):

def getTPFPTNFN(y_true, y_pred):
    TP, FP, TN, FN = 0, 0, 0, 0
    for s_true, s_pred in zip(y_true, y_pred):
        if s_true == 1:
            if s_pred == 1: 
                TP += 1
            else:
                FN += 1
        else:
            if s_pred == 0:
                TN += 1
            else:
                FP += 1
    return TP, FP, TN, FN

Answer 11 (score: 0):

I think both of those answers are not fully correct. For example, suppose we have the following arrays:

y_actual = [1,1,0,0,0,1,0,1,0,0,0]

y_predic = [1,1,1,0,0,0,1,1,0,1,0]

If we compute the FP, FN, TP and TN values manually, they should be as follows:

FP:3 FN:1 TP:3 TN:4

However, if we use the first answer, the results come out as follows:

FP:1 FN:3 TP:3 TN:4

They are not correct, because in the first answer a False Positive should be a case where the actual is 0 but the prediction is 1, not the opposite. The same goes for False Negatives.

And if we use the second answer, the results are computed as follows:

FP:3 FN:1 TP:4 TN:3

The True Positive and True Negative numbers are not correct; they should be the opposite.

Am I correct with my computations? Please let me know if I am missing something.

Answer 12 (score: 0):

I have tried some of the answers and found that they did not work for me.

This works for me:

from sklearn.metrics import classification_report

print(classification_report(y_test, predicted)) 
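If you need the numbers programmatically rather than as printed text, newer scikit-learn versions (0.20+) can return the report as a dict:

report = classification_report(y_test, predicted, output_dict=True)
print(report['1']['recall'])  # per-class metrics, keyed by label as a string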

Answer 13 (score: 0):

In scikit-learn version 0.22, you can do it like this:

from sklearn.metrics import multilabel_confusion_matrix

y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]

mcm = multilabel_confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])

tn = mcm[:, 0, 0]
tp = mcm[:, 1, 1]
fn = mcm[:, 1, 0]
fp = mcm[:, 0, 1]
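Each of these is an array with one entry per label; for the toy data above, tp comes out as array([2, 0, 2]) for ant, bird and cat respectively.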

Answer 14 (score: 0):

#False positive cases
train = pd.merge(X_train, y_train,left_index=True, right_index=True)
y_train_pred = pd.DataFrame(y_train_pred)
y_train_pred.rename(columns={0 :'Predicted'}, inplace=True )
train = train.reset_index(drop=True).merge(y_train_pred.reset_index(drop=True),
                                           left_index=True, right_index=True)
train['FP'] = np.where((train['Banknote']=="Forged") & (train['Predicted']=="Genuine"),1,0)
train[train.FP != 0]

Answer 15 (score: 0):

#FalseNegatives

test = pd.merge(Variables_test, Banknote_test,left_index=True, right_index=True)
Banknote_test_pred = pd.DataFrame(banknote_test_pred)
Banknote_test_pred.rename(columns={0 :'Predicted'}, inplace=True )
test = test.reset_index(drop=True).merge(Banknote_test_pred.reset_index(drop=True), left_index=True, right_index=True)
test['FN'] = np.where((test['Banknote']=="Genuine") & (test['Predicted']=="Forged"),1,0)
test[test.FN != 0]

Answer 16 (score: 0):

None of the answers given so far worked for me, because I sometimes ended up with a confusion matrix with only one entry. The following code is able to mitigate this issue:

from sklearn.metrics import confusion_matrix
CM = confusion_matrix(y, y_hat)
            
try:
    TN = CM[0][0]
except IndexError:
    TN = 0
try:
    FN = CM[1][0]
except IndexError:
    FN = 0
try:
    TP = CM[1][1]
except IndexError:
    TP = 0
try:
    FP = CM[0][1]
except IndexError:
    FP = 0

Note that "y" holds the ground-truth values and "y_hat" the predicted values.
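An alternative that avoids the try/except entirely is to pass the labels parameter (part of confusion_matrix's API), which forces a full 2x2 matrix even when only one class occurs:

CM = confusion_matrix(y, y_hat, labels=[0, 1])
TN, FP, FN, TP = CM.ravel()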

Answer 17 (score: -1):

Here is a fix for invoketheshell's buggy code (which currently shows as the accepted answer):

def performance_measure(y_actual, y_hat):
    TP = 0
    FP = 0
    TN = 0
    FN = 0

    for i in range(len(y_hat)): 
        if y_actual[i] == y_hat[i]==1:
            TP += 1
        if y_hat[i] == 1 and y_actual[i] == 0:
            FP += 1
        if y_hat[i] == y_actual[i] == 0:
            TN +=1
        if y_hat[i] == 0 and y_actual[i] == 1:
            FN +=1

    return(TP, FP, TN, FN)