High training accuracy and low training loss, but poor classification results

Date: 2019-06-12 14:27:45

Tags: python tensorflow machine-learning keras deep-learning

I trained a neural network on image descriptors belonging to 3 classes (two kinds of animals, and one set of landscape images). The descriptors were precomputed with VGG16 (without the last fully connected layers) and gave good results with another classifier (an SVM).
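For reference, a 25088-dimensional descriptor is exactly the flattened block5_pool output of VGG16 (7 × 7 × 512 for 224 × 224 inputs). Below is a minimal sketch of how such descriptors can be precomputed; the function name describe and the preprocessing details are assumptions, not necessarily the asker's actual pipeline:

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# VGG16 without the fully connected head; the block5_pool output is (7, 7, 512)
extractor = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

def describe(img_path):
    # Load and preprocess one image the way VGG16 expects
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    # Flatten the (1, 7, 7, 512) feature map into a single 25088-dim descriptor
    return extractor.predict(x).reshape(-1)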

This is my model:

import keras

# Simple MLP head on top of the precomputed VGG16 descriptors (25088 = 7 * 7 * 512)
model = keras.models.Sequential()
model.add(keras.layers.Dense(256, input_shape = (25088,), activation = 'relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(len(classes), activation = 'softmax'))
model.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])

I train it like this:

model.fit(
    X,
    y,
    epochs = 50,
    batch_size = 32,
    validation_split = 0.3,
    class_weight = class_weights
)

The dataset is imbalanced across the three classes: class 0 has 2135 items, class 1 has 1472, and class 2 has 760. I use class_weights to compensate:

class_weights = {c: len(y) / np.sum(y[:,c] == 1.) for c in range(y.shape[1])}

Its values are {0: 2.045433255269321, 1: 2.9667119565217392, 2: 5.746052631578947}
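Each weight is simply the total sample count divided by the per-class count, which checks out against the class sizes above (this assumes y is one-hot encoded, e.g. via keras.utils.to_categorical):

import numpy as np

counts = np.array([2135, 1472, 760])  # items in class 0, 1, 2
print(counts.sum() / counts)          # [2.0454 2.9667 5.7461] -- matches the weights above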

The accuracy and loss during training are very good (not nearly as good on the validation set):

Epoch 1/50
3056/3056 [==============================] - 16s 5ms/step - loss: 3.1452 - acc: 0.9107 - val_loss: 54.5996 - val_acc: 0.3997
Epoch 2/50
3056/3056 [==============================] - 2s 523us/step - loss: 1.5053 - acc: 0.9627 - val_loss: 53.9704 - val_acc: 0.4134
Epoch 3/50
3056/3056 [==============================] - 2s 521us/step - loss: 1.3939 - acc: 0.9607 - val_loss: 54.4188 - val_acc: 0.4043
Epoch 4/50
3056/3056 [==============================] - 2s 522us/step - loss: 1.5265 - acc: 0.9545 - val_loss: 53.7266 - val_acc: 0.4195
Epoch 5/50
3056/3056 [==============================] - 2s 522us/step - loss: 1.4650 - acc: 0.9562 - val_loss: 54.0863 - val_acc: 0.4111
Epoch 6/50
3056/3056 [==============================] - 2s 521us/step - loss: 1.3557 - acc: 0.9607 - val_loss: 53.8348 - val_acc: 0.4172
Epoch 7/50
3056/3056 [==============================] - 2s 520us/step - loss: 1.0602 - acc: 0.9699 - val_loss: 54.1266 - val_acc: 0.4104
Epoch 8/50
3056/3056 [==============================] - 2s 526us/step - loss: 0.8097 - acc: 0.9781 - val_loss: 55.3352 - val_acc: 0.3852
Epoch 9/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.8912 - acc: 0.9741 - val_loss: 53.8360 - val_acc: 0.4172
Epoch 10/50
3056/3056 [==============================] - 2s 517us/step - loss: 0.9512 - acc: 0.9732 - val_loss: 54.1430 - val_acc: 0.4096
Epoch 11/50
3056/3056 [==============================] - 2s 519us/step - loss: 0.9200 - acc: 0.9745 - val_loss: 54.4828 - val_acc: 0.4027
Epoch 12/50
3056/3056 [==============================] - 2s 526us/step - loss: 0.7612 - acc: 0.9797 - val_loss: 53.9240 - val_acc: 0.4150
Epoch 13/50
3056/3056 [==============================] - 2s 522us/step - loss: 0.6478 - acc: 0.9820 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 14/50
3056/3056 [==============================] - 2s 525us/step - loss: 0.9011 - acc: 0.9764 - val_loss: 54.3105 - val_acc: 0.4073
Epoch 15/50
3056/3056 [==============================] - 2s 517us/step - loss: 0.8652 - acc: 0.9787 - val_loss: 54.0913 - val_acc: 0.4119
Epoch 16/50
3056/3056 [==============================] - 2s 522us/step - loss: 0.7115 - acc: 0.9800 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 17/50
3056/3056 [==============================] - 2s 518us/step - loss: 0.6954 - acc: 0.9804 - val_loss: 53.8322 - val_acc: 0.4172
Epoch 18/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.7845 - acc: 0.9794 - val_loss: 55.1453 - val_acc: 0.3883
Epoch 19/50
3056/3056 [==============================] - 2s 520us/step - loss: 0.8089 - acc: 0.9777 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 20/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.6779 - acc: 0.9820 - val_loss: 54.0726 - val_acc: 0.4119
Epoch 21/50
3056/3056 [==============================] - 2s 517us/step - loss: 0.5939 - acc: 0.9840 - val_loss: 54.3102 - val_acc: 0.4073
Epoch 22/50
3056/3056 [==============================] - 2s 518us/step - loss: 0.6781 - acc: 0.9810 - val_loss: 54.1643 - val_acc: 0.4104
Epoch 23/50
3056/3056 [==============================] - 2s 514us/step - loss: 0.6912 - acc: 0.9804 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 24/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.6296 - acc: 0.9830 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 25/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.8910 - acc: 0.9748 - val_loss: 55.4755 - val_acc: 0.3814
Epoch 26/50
3056/3056 [==============================] - 2s 522us/step - loss: 0.7642 - acc: 0.9794 - val_loss: 54.3102 - val_acc: 0.4073
Epoch 27/50
3056/3056 [==============================] - 2s 519us/step - loss: 0.6787 - acc: 0.9827 - val_loss: 54.3102 - val_acc: 0.4073
Epoch 28/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.6762 - acc: 0.9804 - val_loss: 53.9819 - val_acc: 0.4142
Epoch 29/50
3056/3056 [==============================] - 2s 519us/step - loss: 0.6418 - acc: 0.9823 - val_loss: 54.1996 - val_acc: 0.4096
Epoch 30/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.6038 - acc: 0.9833 - val_loss: 55.0238 - val_acc: 0.3921
Epoch 31/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.6223 - acc: 0.9836 - val_loss: 53.8964 - val_acc: 0.4150
Epoch 32/50
3056/3056 [==============================] - 2s 523us/step - loss: 0.6354 - acc: 0.9830 - val_loss: 54.3212 - val_acc: 0.4058
Epoch 33/50
3056/3056 [==============================] - 2s 561us/step - loss: 0.6124 - acc: 0.9840 - val_loss: 54.4909 - val_acc: 0.4035
Epoch 34/50
3056/3056 [==============================] - 2s 539us/step - loss: 0.5937 - acc: 0.9846 - val_loss: 53.9819 - val_acc: 0.4142
Epoch 35/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.4993 - acc: 0.9849 - val_loss: 53.9906 - val_acc: 0.4134
Epoch 36/50
3056/3056 [==============================] - 2s 525us/step - loss: 0.5461 - acc: 0.9846 - val_loss: 53.8360 - val_acc: 0.4172
Epoch 37/50
3056/3056 [==============================] - 2s 530us/step - loss: 0.4849 - acc: 0.9859 - val_loss: 54.0580 - val_acc: 0.4119
Epoch 38/50
3056/3056 [==============================] - 2s 527us/step - loss: 0.4078 - acc: 0.9882 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 39/50
3056/3056 [==============================] - 2s 526us/step - loss: 0.5824 - acc: 0.9840 - val_loss: 54.4196 - val_acc: 0.4050
Epoch 40/50
3056/3056 [==============================] - 2s 525us/step - loss: 0.4924 - acc: 0.9863 - val_loss: 54.3267 - val_acc: 0.4058
Epoch 41/50
3056/3056 [==============================] - 2s 515us/step - loss: 0.4689 - acc: 0.9876 - val_loss: 53.8725 - val_acc: 0.4165
Epoch 42/50
3056/3056 [==============================] - 2s 516us/step - loss: 0.5954 - acc: 0.9853 - val_loss: 54.4130 - val_acc: 0.4043
Epoch 43/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.5741 - acc: 0.9849 - val_loss: 53.9755 - val_acc: 0.4142
Epoch 44/50
3056/3056 [==============================] - 2s 535us/step - loss: 0.4941 - acc: 0.9856 - val_loss: 53.7995 - val_acc: 0.4180
Epoch 45/50
3056/3056 [==============================] - 2s 528us/step - loss: 0.5669 - acc: 0.9827 - val_loss: 53.8360 - val_acc: 0.4172
Epoch 46/50
3056/3056 [==============================] - 2s 528us/step - loss: 0.4975 - acc: 0.9856 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 47/50
3056/3056 [==============================] - 2s 533us/step - loss: 0.5870 - acc: 0.9827 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 48/50
3056/3056 [==============================] - 2s 536us/step - loss: 0.4608 - acc: 0.9863 - val_loss: 53.9089 - val_acc: 0.4157
Epoch 49/50
3056/3056 [==============================] - 2s 554us/step - loss: 0.9252 - acc: 0.9777 - val_loss: 54.1243 - val_acc: 0.4104
Epoch 50/50
3056/3056 [==============================] - 2s 576us/step - loss: 0.4731 - acc: 0.9876 - val_loss: 54.2266 - val_acc: 0.4088

However, when I test the model on a set of 24 images (12 from class 0 and 12 from class 2), the results are unsatisfactory. These are the probabilities the model gives for the class 0 images:

[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]

...and for the class 2 images:

[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1.0000000e+00 1.2065205e-22 0.0000000e+00]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]

The model seems heavily biased towards class 0, which makes me think I am not using class_weight correctly.

Where does this bias come from?

1 Answer:

Answer 0 (score: 1):

Assuming you are using some of your data for validation (during training), I would say you are overfitting.

Your val_acc stays at around 40% the whole time, which is even lower than the share of class-1 images you should have in your validation set.

3056/3056 [==============================] - 2s 576us/step - loss: 0.4731 - acc: 0.9876 - val_loss: 54.2266 - val_acc: 0.4088

In other words, your network is memorizing your training data. Among other things, this can happen if you do not have enough data or if your network is too complex.
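As a hedged sketch (not something the answer prescribes), one common way to limit this kind of memorization is to stop training as soon as the validation loss stops improving; restore_best_weights needs a reasonably recent Keras:

from keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 5 epochs; keep the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

model.fit(
    X, y,
    epochs = 50,
    batch_size = 32,
    validation_split = 0.3,
    class_weight = class_weights,
    callbacks = [early_stop]
)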

Did you pick your validation and test data at random? If not, you may be unaware of the differences between your training data and your test data.
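This matters in particular because Keras's validation_split slices off the last fraction of the arrays as they are passed in, without shuffling, so data ordered by class gives a non-representative validation set. A minimal sketch of shuffling first (assuming X and y are NumPy arrays):

import numpy as np

# Shuffle samples and labels together before validation_split takes the last 30%
perm = np.random.permutation(len(X))
X, y = X[perm], y[perm]

model.fit(X, y, epochs = 50, batch_size = 32,
          validation_split = 0.3, class_weight = class_weights)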