Validation accuracy not increasing while training ResNet50

Time: 2018-10-28 14:07:33

Tags: machine-learning keras deep-learning

I am fine-tuning a ResNet50 model for face recognition with data augmentation, but while the training accuracy keeps improving, the validation accuracy has not improved at all from the very beginning. I cannot figure out what is going wrong; please take a look at my code.

I have tried tweaking the top layers I added, but it did not help.

import os
os.environ['KERAS_BACKEND'] = 'tensorflow'

from keras.applications import ResNet50
from keras.models import Model
from keras.layers import Dense, Flatten, GlobalAveragePooling2D, Input, Dropout
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

num_classes = 13

# ResNet50 convolutional base with locally downloaded no-top ImageNet weights
base = ResNet50(include_top=False,
                weights='resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5',
                input_tensor=Input(shape=(100, 100, 3)))

# New classification head on top of the base
x = base.output
#x = GlobalAveragePooling2D()(x)
x = Flatten()(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base.input, outputs=predictions)

# Freeze the convolutional base so only the new head is trained
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])

# Data augmentation for training, plain rescaling for validation
train_generator = ImageDataGenerator(featurewise_center=True,
                                     rotation_range=20,
                                     rescale=1./255,
                                     shear_range=0.2,
                                     zoom_range=0.2,
                                     width_shift_range=0.2,
                                     height_shift_range=0.2,
                                     horizontal_flip=True)
test_generator = ImageDataGenerator(rescale=1./255)

# `image` and `label` are the face images and labels prepared earlier (not shown)
x_train, x_test, y_train, y_test = train_test_split(image, label, test_size=0.2,
                                                    shuffle=True, random_state=0)

train_generator.fit(x_train)
test_generator.fit(x_test)

model.fit_generator(train_generator.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=10, epochs=50,
                    validation_data=test_generator.flow(x_test, y_test))

Output:

Epoch 19/50
10/10 [==============================] - 105s 10s/step - loss: 1.9387 - acc: 0.3803 - val_loss: 2.6820 - val_acc: 0.0709
Epoch 20/50
10/10 [==============================] - 107s 11s/step - loss: 2.0725 - acc: 0.3230 - val_loss: 2.6689 - val_acc: 0.0709
Epoch 21/50
10/10 [==============================] - 103s 10s/step - loss: 1.8884 - acc: 0.3375 - val_loss: 2.6677 - val_acc: 0.0709
Epoch 22/50
10/10 [==============================] - 95s 10s/step - loss: 1.8265 - acc: 0.4051 - val_loss: 2.6799 - val_acc: 0.0709
Epoch 23/50
10/10 [==============================] - 100s 10s/step - loss: 1.8346 - acc: 0.3812 - val_loss: 2.6929 - val_acc: 0.0709
Epoch 24/50
10/10 [==============================] - 102s 10s/step - loss: 1.9547 - acc: 0.3352 - val_loss: 2.6952 - val_acc: 0.0709
Epoch 25/50
10/10 [==============================] - 104s 10s/step - loss: 1.9472 - acc: 0.3281 - val_loss: 2.7168 - val_acc: 0.0709
Epoch 26/50
10/10 [==============================] - 103s 10s/step - loss: 1.8818 - acc: 0.4063 - val_loss: 2.7071 - val_acc: 0.0709
Epoch 27/50
10/10 [==============================] - 106s 11s/step - loss: 1.8053 - acc: 0.4000 - val_loss: 2.7059 - val_acc: 0.0709
Epoch 28/50
10/10 [==============================] - 104s 10s/step - loss: 1.9601 - acc: 0.3493 - val_loss: 2.7104 - val_acc: 0.0709

1 answer:

Answer 0 (score: 1):

This happened because I simply added the fully connected layers directly on top of the base without training them first, as explained in the Keras blog: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html


"In order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it."
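As a concrete illustration of training the top-level classifier first, here is a minimal sketch, not the answer's exact code. It assumes the x_train/x_test/y_train/y_test arrays from the question are available and reuses the local no-top weights file; the Dense(256) head size, the epoch count, and the 'top_model_weights.h5' file name are illustrative choices.

# Stage 1 (sketch): run the data once through the ResNet50 base and train only
# the small top classifier on the resulting "bottleneck" features, so the new
# layers start from trained weights rather than random initialization.
from keras.applications import ResNet50
from keras.models import Sequential
from keras.layers import Input, Flatten, Dense, Dropout

base = ResNet50(include_top=False,
                weights='resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5',
                input_tensor=Input(shape=(100, 100, 3)))

# Bottleneck features: a single forward pass through the convolutional base,
# using the same 1/255 rescaling as in the question
train_features = base.predict(x_train / 255., batch_size=32)
test_features = base.predict(x_test / 255., batch_size=32)

top_model = Sequential([
    Flatten(input_shape=train_features.shape[1:]),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(13, activation='softmax'),
])
top_model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
                  metrics=['accuracy'])
top_model.fit(train_features, y_train, epochs=30, batch_size=32,
              validation_data=(test_features, y_test))
top_model.save_weights('top_model_weights.h5')  # illustrative file name

Once this classifier reaches a reasonable validation accuracy on the bottleneck features, its weights can be reused in the fine-tuning stage.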

So the answer is to first train the top model separately, then build a new model that puts that top model (with its trained weights) on top of the ResNet50 base model (with its pre-trained weights), and train the combined model with the base model (ResNet50) frozen except for its last layers.
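A sketch of that second stage follows, under the same assumptions: the top architecture must match the one trained above, and the 'res5'/'bn5' layer-name prefixes, the learning rate, and the epoch count are assumptions for the 2018-era keras.applications ResNet50, not values given in the answer.

# Stage 2 (sketch): stack the already-trained top classifier onto the ResNet50
# base and fine-tune, keeping the base frozen except for its last block.
from keras.applications import ResNet50
from keras.models import Model, Sequential
from keras.layers import Input, Flatten, Dense, Dropout
from keras.optimizers import SGD

base = ResNet50(include_top=False,
                weights='resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5',
                input_tensor=Input(shape=(100, 100, 3)))

# Rebuild the same top architecture as in stage 1 and load its trained weights
top_model = Sequential([
    Flatten(input_shape=base.output_shape[1:]),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(13, activation='softmax'),
])
top_model.load_weights('top_model_weights.h5')

model = Model(inputs=base.input, outputs=top_model(base.output))

# Freeze the base except for the last ResNet block ('res5*'/'bn5*' layers,
# assuming this Keras version's naming), then fine-tune with a small learning rate
for layer in base.layers:
    layer.trainable = layer.name.startswith(('res5', 'bn5'))

model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train / 255., y_train, batch_size=32, epochs=20,
          validation_data=(x_test / 255., y_test))

Because the head already starts from trained weights, the gradient updates reaching the unfrozen ResNet block stay small, which is exactly the point the Keras blog quote makes.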