Convolutional Neural Network - Keras val_acc KeyError: 'acc'

Posted: 2017-03-09 07:20:54

Tags: python keras

I am trying to implement a CNN with Theano, using the Keras library. My dataset is 55 letter images, 28x28 pixels each.

In the last part I get this error:

train_acc=hist.history['acc']
KeyError: 'acc'

Any help is much appreciated. Thanks.

Here is part of my code:



from keras.models import Sequential
from keras.models import Model
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD, RMSprop, adam
from keras.utils import np_utils

import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from urllib.request import urlretrieve
import pickle
import os
import gzip
import numpy as np
import theano
import lasagne
from lasagne import layers
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet
from nolearn.lasagne import visualize
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from PIL import Image
import PIL.Image
#from Image import *
import webbrowser
from numpy import *
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
from tkinter import *
from tkinter.ttk import *
import tkinter

from keras import backend as K
K.set_image_dim_ordering('th')
%%%%%%%%%%

batch_size = 10

# number of output classes
nb_classes = 6

# number of epochs to train
nb_epoch = 5

# input image dimensions
img_rows, img_clos = 28,28

# number of channels
img_channels = 3

# number of convolutional filters to use
nb_filters = 32

# size of pooling area for max pooling
nb_pool = 2

# convolution kernel size
nb_conv = 3

%%%%%%%%

model = Sequential()

model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(1, img_rows, img_clos)))
convout1 = Activation('relu')
model.add(convout1)
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
convout2 = Activation('relu')
model.add(convout2)
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.5))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')

%%%%%%%%%%%%

hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
              show_accuracy=True, verbose=1, validation_data=(X_test, Y_test))
            
            
hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
              show_accuracy=True, verbose=1, validation_split=0.2)
%%%%%%%%%%%%%%

train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(nb_epoch)
#xc=range(on_epoch_end)

plt.figure(1,figsize=(7,5))
plt.plot(xc,train_loss)
plt.plot(xc,val_loss)
plt.xlabel('num of Epochs')
plt.ylabel('loss')
plt.title('train_loss vs val_loss')
plt.grid(True)
plt.legend(['train','val'])
print (plt.style.available) # use bmh, classic,ggplot for big pictures
plt.style.use(['classic'])

plt.figure(2,figsize=(7,5))
plt.plot(xc,train_acc)
plt.plot(xc,val_acc)
plt.xlabel('num of Epochs')
plt.ylabel('accuracy')
plt.title('train_acc vs val_acc')
plt.grid(True)
plt.legend(['train','val'],loc=4)
#print plt.style.available # use bmh, classic,ggplot for big pictures
plt.style.use(['classic'])




8 Answers:

Answer 0 (score: 5)

In a less common case (which, as I expected, appeared after some TensorFlow updates), I still got the same error despite choosing metrics=["accuracy"] in the model definition.

The solution was to replace metrics=["acc"] with metrics=["accuracy"]. In my case I could not plot the parameters of the training history; I had to replace

acc = history.history['acc']
val_acc = history.history['val_acc']

loss = history.history['loss']
val_loss = history.history['val_loss']

with

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

Answer 1 (score: 4)

You can use print(history.history.keys()) to find out which metrics you have and what they are called. In my case it was also called "accuracy", not "acc".
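If your plotting code has to work across Keras versions, a small lookup helper can pick whichever key is present. This is a plain-Python sketch, not part of the Keras API; `get_metric` is a hypothetical name, and the dicts below simulate `history.history` from two Keras versions:

```python
def get_metric(history_dict, *names):
    """Return the first matching series from a Keras-style history dict."""
    for name in names:
        if name in history_dict:
            return history_dict[name]
    raise KeyError(f"none of {names} found; available keys: {list(history_dict)}")

# Simulated history.history dicts from an older and a newer Keras version:
old_style = {'loss': [0.9, 0.5], 'acc': [0.6, 0.8]}
new_style = {'loss': [0.9, 0.5], 'accuracy': [0.6, 0.8]}

print(get_metric(old_style, 'acc', 'accuracy'))   # [0.6, 0.8]
print(get_metric(new_style, 'acc', 'accuracy'))   # [0.6, 0.8]
```

With a helper like this the plotting code does not need to know which Keras version produced the history.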

Answer 2 (score: 2)

From the keras source:

warnings.warn('The "show_accuracy" argument is deprecated, '
                          'instead you should pass the "accuracy" metric to '
                          'the model at compile time:\n'
                          '`model.compile(optimizer, loss, '
                          'metrics=["accuracy"])`')

The correct way to get accuracy reported is indeed to compile the model as follows:

model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])

Did that help?

Answer 3 (score: 1)

Be sure to check this "breaking change":

Metric and loss values are now reported under the exact name the user specified (e.g. if you pass metrics=['acc'], the metric will be reported under the string "acc", not "accuracy"; conversely, metrics=['accuracy'] will be reported under the string "accuracy").
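The effect of that change can be sketched in plain Python. `history_keys` below is a made-up helper that mimics, in simplified form, how the history keys are derived from the strings passed to `metrics`; it is not a real Keras function:

```python
def history_keys(metric_names, has_validation=True):
    # Simplified illustration: each requested metric is reported under the
    # exact string passed to `metrics`, plus a 'val_'-prefixed copy when
    # validation data is used.
    keys = ['loss'] + (['val_loss'] if has_validation else [])
    for name in metric_names:
        keys.append(name)
        if has_validation:
            keys.append('val_' + name)
    return keys

print(history_keys(['acc']))        # ['loss', 'val_loss', 'acc', 'val_acc']
print(history_keys(['accuracy']))   # ['loss', 'val_loss', 'accuracy', 'val_accuracy']
```

So code that indexes `history.history['acc']` only works if the model was compiled with the string 'acc', and vice versa.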

Answer 4 (score: 0)

The log variables follow the metrics you specify when you compile your model.

For example, the following code

model.compile(loss="mean_squared_error", optimizer=optimizer)
model.fit_generator(gen, epochs=50, callbacks=[ModelCheckpoint("model_{acc}.hdf5")])

will give a KeyError: 'acc', because you never set metrics=["accuracy"] in model.compile.

The error also occurs when the metric names don't match. For example,

model.compile(loss="mean_squared_error", optimizer=optimizer, metrics=["binary_accuracy"])
model.fit_generator(gen, epochs=50, callbacks=[ModelCheckpoint("model_{acc}.hdf5")])

still gives a KeyError: 'acc', because you set a binary_accuracy metric but later requested acc.

If you change the above code to

model.compile(loss="mean_squared_error", optimizer=optimizer, metrics=["binary_accuracy"])
model.fit_generator(gen, epochs=50, callbacks=[ModelCheckpoint("model_{binary_accuracy}.hdf5")])

it will work.
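The KeyError comes from the filepath template: ModelCheckpoint fills placeholders such as {binary_accuracy} from the per-epoch logs dict, whose keys follow the compile-time metric names. Roughly, as a plain-Python sketch with made-up log values (not the real callback internals):

```python
# Keys in `logs` mirror the metric names given at compile time.
logs = {'loss': 0.42, 'binary_accuracy': 0.91}   # hypothetical epoch-end values

# A matching placeholder is substituted from the logs:
print("model_{binary_accuracy}.hdf5".format(**logs))  # model_0.91.hdf5

# A placeholder with no matching key raises the KeyError seen above:
try:
    "model_{acc}.hdf5".format(**logs)
except KeyError as e:
    print("KeyError:", e)  # prints: KeyError: 'acc'
```

So the placeholder names in the checkpoint filename must exactly match the metric names you compiled with.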

Answer 5 (score: 0)

In my case, changing from

metrics=["acc"]

to

metrics=["accuracy"]

was the solution, so switching from one to the other may help.

Answer 6 (score: 0)

If you are using TensorFlow 2.3, you can specify it like this:

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=[tf.keras.metrics.CategoricalAccuracy(name="acc")])

Answer 7 (score: 0)

Some things have changed in newer versions of TensorFlow, so we have to replace it with:

acc = history.history['accuracy']