model.save_weights and model.load_weights not working as expected

Date: 2017-01-07 20:35:58

Tags: python serialization neural-network theano keras

I'm new to machine learning and working through the fast.ai course. We're learning about VGG16, and I've run into a problem saving my model. I can't tell what I'm doing wrong. When I build the model from scratch and train it to tell cats from dogs, I get:

from __future__ import division,print_function
from vgg16 import Vgg16
import os, json
from glob import glob
import numpy as np
from matplotlib import pyplot as plt
import utils; reload(utils)
from utils import plots


np.set_printoptions(precision=4, linewidth=100)
batch_size=64

path = "dogscats/sample"
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'/train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'/valid', batch_size=batch_size*2)
vgg.finetune(batches)
no_of_epochs = 4
latest_weights_filename = None
for epoch in range(no_of_epochs):
    print("Running epoch: %d" % epoch)
    vgg.fit(batches, val_batches, nb_epoch=1)
    latest_weights_filename = 'ft%d.h5' % epoch
    vgg.model.save_weights(path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)

Found 160 images belonging to 2 classes.
Found 40 images belonging to 2 classes.
Running epoch: 0
Epoch 1/1
160/160 [==============================] - 4s - loss: 1.8980 - acc: 0.6125 - val_loss: 0.5442 - val_acc: 0.8500
Running epoch: 1
Epoch 1/1
160/160 [==============================] - 4s - loss: 0.7194 - acc: 0.8563 - val_loss: 0.2167 - val_acc: 0.9500
Running epoch: 2
Epoch 1/1
160/160 [==============================] - 4s - loss: 0.1809 - acc: 0.9313 - val_loss: 0.1604 - val_acc: 0.9750
Running epoch: 3
Epoch 1/1
160/160 [==============================] - 4s - loss: 0.2733 - acc: 0.9375 - val_loss: 0.1684 - val_acc: 0.9750
Completed 4 fit operations

But now, when I go to load one of those weight files, the model starts from scratch! For example, I expected the run below to have a val_acc of 0.9750! What am I misunderstanding or doing wrong? Why is the val_acc of this loaded model so low?

vgg = Vgg16()
vgg.model.load_weights(path+'ft3.h5')
batches = vgg.get_batches(path+'/train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'/valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)

Found 160 images belonging to 2 classes.
Found 40 images belonging to 2 classes.
Epoch 1/1
160/160 [==============================] - 6s - loss: 1.3110 - acc: 0.6562 - val_loss: 0.5961 - val_acc: 0.8250

1 Answer:

Answer 0 (score: 2)

The problem lies in the finetune function. If you dig into its definition:

def finetune(self, batches):
    model = self.model
    model.pop()  # removes the model's last (output) layer
    for layer in model.layers: layer.trainable = False  # freeze the remaining layers
    # add a fresh output layer with randomly initialised weights
    model.add(Dense(batches.nb_class, activation='softmax'))
    self.compile()

...you can see that the call to pop removes the model's last layer. By doing this, you throw away information from your trained model: the last layer is added back with random weights and then trained all over again. That's why the accuracy drops.
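
The practical workaround is to rebuild the finetuned architecture first and load the weights afterwards, so the saved weights land on a matching layer structure. A minimal sketch (not from the original answer; it assumes the same Vgg16 wrapper, paths, and Keras 1 generator API as the question):

vgg = Vgg16()
batches = vgg.get_batches(path+'/train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'/valid', batch_size=batch_size*2)

# finetune() pops the old top layer and adds the new softmax, recreating
# the exact architecture that save_weights serialized earlier...
vgg.finetune(batches)

# ...so load_weights now restores every layer, including the trained top,
# instead of finetune() later throwing the loaded top layer away.
vgg.model.load_weights(path+'ft3.h5')

# Sanity check without any further training; this should report a val_acc
# close to the 0.9750 seen at the end of epoch 3.
loss, acc = vgg.model.evaluate_generator(val_batches, val_batches.nb_sample)
print("val_loss: %.4f - val_acc: %.4f" % (loss, acc))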
