Confusion matrix for logistic regression

Asked: 2018-12-28 20:06:32

Tags: r machine-learning

I am trying to run a logistic regression on the dataset provided here, using 5-fold cross-validation.

My goal is to predict the "Classification" column of the dataset, which takes the value 1 (no cancer) or 2 (cancer).

Here is the complete code:

    library(ISLR)
    library(boot)
    dataCancer <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv")

    # Randomly shuffle the data
    dataCancer <- dataCancer[sample(nrow(dataCancer)), ]
    # Create 5 equally sized folds
    folds <- cut(seq(1, nrow(dataCancer)), breaks = 5, labels = FALSE)
    # Perform 5-fold cross-validation
    for (i in 1:5) {
        # Segment the data by fold using the which() function
        testIndexes <- which(folds == i)
        testData <- dataCancer[testIndexes, ]
        trainData <- dataCancer[-testIndexes, ]
        # Use the test and train data partitions however you desire...

        classification_model = glm(as.factor(Classification) ~ ., data = trainData, family = binomial)
        summary(classification_model)

        # Use the fitted model to do predictions for the test data
        model_pred_probs = predict(classification_model, testData, type = "response")
        model_predict_classification = rep(0, length(testData))
        model_predict_classification[model_pred_probs > 0.5] = 1

        # Create the confusion matrix and compute the misclassification rate
        table(model_predict_classification, testData)
        mean(model_predict_classification != testData)
    }

At the end, I am asking for help with these two lines:

    table(model_predict_classification, testData)
    mean(model_predict_classification != testData)

I get the following error:

    Error in table(model_predict_classification, testData) : all arguments must have the same length

I don't really understand how to build the confusion matrix here.

I would like to get 5 misclassification rates, one per fold. trainData and testData have been split into 5 parts, so each test fold should have the same length as model_predict_classification.

Thanks for your help.

1 Answer:

Answer 0 (score: 1)

Here is a solution using the caret package: the cancer data are first split into training and test sets, and 5-fold cross-validation is then run on the training set. Confusion matrices are produced for both the cross-validated training data and the held-out test data.

caret::train() reports the average accuracy across the 5 hold-out folds. Per-fold results can be extracted from the returned model object.

    library(caret)
    data <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv")
    # set classification as factor, and recode to
    # 0 = no cancer, 1 = cancer
    data$Classification <- as.factor(data$Classification - 1)
    # split data into training and test, based on values of dependent variable
    trainIndex <- createDataPartition(data$Classification, p = .75, list = FALSE)
    training <- data[trainIndex, ]
    testing <- data[-trainIndex, ]
    trCntl <- trainControl(method = "CV", number = 5)
    glmModel <- train(Classification ~ ., data = training, trControl = trCntl, method = "glm", family = "binomial")
    # print the model info
    summary(glmModel)
    glmModel
    confusionMatrix(glmModel)
    # generate predictions on hold back data
    trainPredicted <- predict(glmModel, testing)
    # generate confusion matrix for hold back data
    confusionMatrix(trainPredicted, reference = testing$Classification)

...and the output:

    > # print the model info
    > summary(glmModel)

    Call: NULL

    Deviance Residuals: 
        Min       1Q   Median       3Q      Max  
    -2.1542  -0.8358   0.2605   0.8260   2.1009  

    Coefficients:
                  Estimate Std. Error z value Pr(>|z|)  
    (Intercept) -4.4039248  3.9159157  -1.125   0.2607  
    Age         -0.0190241  0.0177119  -1.074   0.2828  
    BMI         -0.1257962  0.0749341  -1.679   0.0932 .
    Glucose      0.0912229  0.0389587   2.342   0.0192 *
    Insulin      0.0917095  0.2889870   0.317   0.7510  
    HOMA        -0.1820392  1.2139114  -0.150   0.8808  
    Leptin      -0.0207606  0.0195192  -1.064   0.2875  
    Adiponectin -0.0158448  0.0401506  -0.395   0.6931  
    Resistin     0.0419178  0.0255536   1.640   0.1009  
    MCP.1        0.0004672  0.0009093   0.514   0.6074  
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    (Dispersion parameter for binomial family taken to be 1)

        Null deviance: 119.675  on 86  degrees of freedom
    Residual deviance:  89.804  on 77  degrees of freedom
    AIC: 109.8

    Number of Fisher Scoring iterations: 7

    > glmModel
    Generalized Linear Model 

    87 samples
     9 predictor
     2 classes: '0', '1' 

    No pre-processing
    Resampling: Cross-Validated (5 fold) 
    Summary of sample sizes: 70, 69, 70, 69, 70 
    Resampling results:

      Accuracy   Kappa    
      0.7143791  0.4356231

    > confusionMatrix(glmModel)
    Cross-Validated (5 fold) Confusion Matrix 

    (entries are percentual average cell counts across resamples)

              Reference
    Prediction    0    1
             0 33.3 17.2
             1 11.5 37.9

     Accuracy (average) : 0.7126

    > # generate predictions on hold back data
    > trainPredicted <- predict(glmModel, testing)
    > # generate confusion matrix for hold back data
    > confusionMatrix(trainPredicted, reference = testing$Classification)
    Confusion Matrix and Statistics

              Reference
    Prediction  0  1
             0 11  2
             1  2 14

                   Accuracy : 0.8621          
                     95% CI : (0.6834, 0.9611)
        No Information Rate : 0.5517          
        P-Value [Acc > NIR] : 0.0004078       

                      Kappa : 0.7212          
     Mcnemar's Test P-Value : 1.0000000       

                Sensitivity : 0.8462          
                Specificity : 0.8750          
             Pos Pred Value : 0.8462          
             Neg Pred Value : 0.8750          
                 Prevalence : 0.4483          
             Detection Rate : 0.3793          
       Detection Prevalence : 0.4483          
          Balanced Accuracy : 0.8606          

           'Positive' Class : 0               
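For completeness, the error in the question's loop can also be fixed without caret. It has two causes: `length(testData)` returns the number of *columns* of a data frame (use `nrow(testData)` instead), and the predictions are compared against the whole `testData` data frame rather than its `Classification` column. Below is a minimal sketch of the corrected loop; note that the dataset is replaced by synthetic data (the `Age`/`BMI` predictors and the class rule are made up for illustration) so the example runs without a download:

```r
# Corrected version of the original loop, on synthetic stand-in data.
set.seed(42)
n <- 100
dataCancer <- data.frame(Age = rnorm(n, 50, 10), BMI = rnorm(n, 27, 4))
# Classification coded 1/2, as in the real dataset
dataCancer$Classification <- ifelse(dataCancer$Age + rnorm(n, 0, 10) > 50, 2, 1)

# Shuffle and assign each row to one of 5 folds
dataCancer <- dataCancer[sample(nrow(dataCancer)), ]
folds <- cut(seq_len(nrow(dataCancer)), breaks = 5, labels = FALSE)

error_rates <- numeric(5)
for (i in 1:5) {
  testIndexes <- which(folds == i)
  testData  <- dataCancer[testIndexes, ]
  trainData <- dataCancer[-testIndexes, ]

  model <- glm(as.factor(Classification) ~ ., data = trainData, family = binomial)
  probs <- predict(model, testData, type = "response")

  # nrow(), not length(): length() on a data frame counts columns
  pred <- rep(1, nrow(testData))
  pred[probs > 0.5] <- 2   # glm models the second factor level ("2") as success

  # Compare against the response column, not the whole data frame
  print(table(predicted = pred, actual = testData$Classification))
  error_rates[i] <- mean(pred != testData$Classification)
}
error_rates  # five misclassification rates, one per fold
```

This keeps the structure of the original code; the only behavioral changes are the `nrow()` fix, recoding the predictions to the dataset's 1/2 labels, and comparing against `testData$Classification`.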