Why is my validation accuracy much higher than my training accuracy, while the test accuracy is only 0.5?

Date: 2019-04-10 07:17:47

Tags: keras neural-network

I am doing some image classification with the inception_v3 model in Keras, but throughout training my training accuracy stays below my validation accuracy. From the very first epoch the validation accuracy is above 0.95, and the training loss is much higher than the validation loss. In the end, the test accuracy is only 0.5, which is very poor.

At first my optimizer was Adam with a learning rate of 0.00001, and the results were poor. I then changed it to SGD with a learning rate of 0.00001, which made no difference to the bad results. I also tried raising the learning rate to 0.1, but the test accuracy stayed around 0.5.
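For reference, swapping the optimizer just means recompiling the model; roughly what I did (a minimal sketch using the learning rates mentioned above, applied to the already-built model) was:

from keras.optimizers import Adam, SGD

# first attempt: Adam with a very small learning rate
model.compile(optimizer=Adam(lr=0.00001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# later attempts: SGD with the same learning rate, and then lr=0.1
model.compile(optimizer=SGD(lr=0.00001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Here is my full code: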

import numpy as np
import pandas as pd
import keras
from keras import layers
from keras.applications.inception_v3 import preprocess_input
from keras.models import Model
from keras.layers.core import Dense
from keras.layers import GlobalAveragePooling2D
from keras.optimizers import Adam, SGD, RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.np_utils import to_categorical
from keras.utils import plot_model
from keras.models import model_from_json
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib.pyplot as plt
import math
import copy
import pydotplus


train_path = 'data/train'
valid_path = 'data/validation'
test_path = 'data/test'
top_model_weights_path = 'model_weigh.h5'

# number of epochs to train top model
epochs = 100
# batch size used by flow_from_directory and predict_generator
batch_size = 2

img_width, img_height = 299, 299
fc_size = 1024
nb_iv3_layers_to_freeze = 172

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   rotation_range=30,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
# augmentation configuration for the validation data
# (note: this applies the same augmentations as training, not just rescaling)
valid_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   rotation_range=30,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)


train_batches = train_datagen.flow_from_directory(train_path,
                                                   target_size=(img_width, img_height),
                                                   classes=None,
                                                   class_mode='categorical',
                                                   batch_size=batch_size,
                                                   shuffle=True)
valid_batches = valid_datagen.flow_from_directory(valid_path,
                                                   target_size=(img_width, img_height),
                                                   classes=None,
                                                   class_mode='categorical',
                                                   batch_size=batch_size,
                                                   shuffle=True)
test_batches = ImageDataGenerator().flow_from_directory(test_path,
                                                         target_size=(img_width, img_height),
                                                         classes=None,
                                                         class_mode='categorical',
                                                         batch_size=batch_size,
                                                         shuffle=False)

nb_train_samples = len(train_batches.filenames)  
# get the size of the training set
nb_classes_train = len(train_batches.class_indices)  
# get the number of classes
predict_size_train = int(math.ceil(nb_train_samples / batch_size))

nb_valid_samples = len(valid_batches.filenames)
nb_classes_valid = len(valid_batches.class_indices)
predict_size_validation = int(math.ceil(nb_valid_samples / batch_size))

nb_test_samples = len(test_batches.filenames)
nb_classes_test = len(test_batches.class_indices)
predict_size_test = int(math.ceil(nb_test_samples / batch_size))


def add_new_last_layer(base_model, nb_classes):
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(fc_size, activation='relu')(x)
    pred = Dense(nb_classes, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=pred)
    return model


# freeze base_model layer in order to get the bottleneck feature
def setup_to_transfer_learn(model, base_model):
    for layer in base_model.layers:
        layer.trainable = False
    model.compile(optimizer=Adam(lr=0.00001),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])


base_model = keras.applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)
model = add_new_last_layer(base_model, nb_classes_train)
setup_to_transfer_learn(model, base_model)

model.summary()

train_labels = train_batches.classes
train_labels = to_categorical(train_labels, num_classes=nb_classes_train)

validation_labels = valid_batches.classes
validation_labels = to_categorical(validation_labels, num_classes=nb_classes_train)

history = model.fit_generator(train_batches,
                              epochs=epochs,
                              steps_per_epoch=nb_train_samples // batch_size,
                              validation_data=valid_batches,
                              validation_steps=nb_valid_samples // batch_size,
                              class_weight='auto')

# save model to json
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize model to HDF5
model.save_weights(top_model_weights_path)
print("Saved model to disk")

# model visualization
plot_model(model,
           show_shapes=True,
           show_layer_names=True,
           to_file='model.png')

(eval_loss, eval_accuracy) = model.evaluate_generator(
    valid_batches,
    steps=nb_valid_samples // batch_size,
    verbose=1)
print("[INFO] evaluate accuracy: {:.2f}%".format(eval_accuracy * 100))
print("[INFO] evaluate loss: {}".format(eval_loss))

test_batches.reset()
predictions = model.predict_generator(test_batches,
                                      steps=predict_size_test,
                                      verbose=0)
# print(predictions)

predicted_class_indices = np.argmax(predictions, axis=1)
# print(predicted_class_indices)

labels = train_batches.class_indices
labels = dict((v, k) for k, v in labels.items())
final_predictions = [labels[k] for k in predicted_class_indices]
# print(final_predictions)

# save as csv file
filenames = test_batches.filenames
results = pd.DataFrame({"Filename": filenames,
                        "Predictions": final_predictions})
results.to_csv("results.csv", index=False)

# evaluation test result
(test_loss, test_accuracy) = model.evaluate_generator(
    test_batches,
    steps=nb_test_samples // batch_size,
    verbose=1)
print("[INFO] test accuracy: {:.2f}%".format(test_accuracy * 100))
print("[INFO] test loss: {}".format(test_loss))

Here is a short summary of the training process:

Epoch 1/100
2000/2000 [==============================] - 146s 73ms/step - loss: 0.4941 - acc: 0.7465 - val_loss: 0.1612 - val_acc: 0.9770
Epoch 2/100
2000/2000 [==============================] - 140s 70ms/step - loss: 0.4505 - acc: 0.7725 - val_loss: 0.1394 - val_acc: 0.9765
Epoch 3/100
2000/2000 [==============================] - 139s 70ms/step - loss: 0.4505 - acc: 0.7605 - val_loss: 0.1643 - val_acc: 0.9560
......
Epoch 98/100
2000/2000 [==============================] - 141s 71ms/step - loss: 0.1348 - acc: 0.9467 - val_loss: 0.0639 - val_acc: 0.9820
Epoch 99/100
2000/2000 [==============================] - 140s 70ms/step - loss: 0.1495 - acc: 0.9365 - val_loss: 0.0780 - val_acc: 0.9770
Epoch 100/100
2000/2000 [==============================] - 138s 69ms/step - loss: 0.1401 - acc: 0.9458 - val_loss: 0.0471 - val_acc: 0.9890

And here are the results I got:

[INFO] evaluate accuracy: 98.55%
[INFO] evaluate loss: 0.05201659869024259
2000/2000 [==============================] - 47s 23ms/step
[INFO] test accuracy: 51.70%
[INFO] test loss: 7.737395915810134

I hope someone can help me figure out what is going wrong.

2 answers:

Answer 0 (score: 0)

As the code stands now, you are not actually freezing the model's layers for transfer learning. In setup_to_transfer_learn you freeze the layers of base_model and then compile the new model (which contains the base model's layers), but the new model itself is never frozen. Just change setup_to_transfer_learn:

def setup_to_transfer_learn(model):
    for layer in model.layers[:-3]:    # the three new layers you added should stay trainable
        layer.trainable = False
    model.compile(optimizer=Adam(lr=0.00001),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

Then call the functions like this:

model = add_new_last_layer(base_model, nb_classes_train)
setup_to_transfer_learn(model)

When you call model.summary(), you should see a big difference in the number of trainable parameters.
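If you want to verify this programmatically rather than by eyeballing the summary, a minimal sketch (assuming the standard Keras backend and numpy) is:

import numpy as np
import keras.backend as K

# count parameters in the trainable vs. frozen weights of the compiled model
trainable_count = int(np.sum([K.count_params(w) for w in model.trainable_weights]))
non_trainable_count = int(np.sum([K.count_params(w) for w in model.non_trainable_weights]))
print('Trainable params:', trainable_count)
print('Non-trainable params:', non_trainable_count)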

Answer 1 (score: 0)

In the end I solved the problem myself: I had forgotten to apply the image preprocessing to the test data. Once I added it, everything worked. I changed this:

test_batches = ImageDataGenerator().flow_from_directory(test_path,
                                             target_size=(img_width, img_height),
                                             classes=None,
                                             class_mode='categorical',
                                             batch_size=batch_size,
                                             shuffle=False)

to this:

test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

test_batches = test_datagen.flow_from_directory(test_path,
                                                target_size=(img_width, img_height),
                                                classes=None,
                                                class_mode='categorical',
                                                batch_size=batch_size,
                                                shuffle=False)
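For anyone wondering why this matters: the InceptionV3 preprocess_input in keras.applications scales pixel values from [0, 255] into [-1, 1], so the network was trained on inputs in that range while my test images were being fed in as raw pixels. As a rough sketch of the equivalent scaling (the helper name is just for illustration):

import numpy as np

def scale_like_inception_preprocess(x):
    # roughly what keras.applications.inception_v3.preprocess_input does:
    # map pixel values from [0, 255] to [-1, 1]
    x = x.astype(np.float32)
    x /= 127.5
    x -= 1.0
    return x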

The test accuracy is now 0.98 and the test loss is 0.06.