Low accuracy after training a CNN

Asked: 2019-12-13 15:21:31

Tags: python tensorflow keras conv-neural-network mnist

I'm trying to train a CNN model with Keras to classify handwritten digits, but I get very low accuracy during training (below 10%) and a large loss. I also tried a simple neural network without convolutions, and it didn't work either.

Here is my code:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

#Explore data
print(y_train[12])
print(np.shape(x_train))
print(np.shape(x_test))
#we have 60000 images for training and 10000 for testing

# Scaling data
x_train = x_train/255
y_train = y_train/255
#reshape the data
x_train = x_train.reshape(60000,28,28,1)
x_test = x_test.reshape(10000,28,28,1)
y_train = y_train.reshape(60000,1)
y_test = y_test.reshape(10000,1)

#Create a model
model = keras.Sequential([
keras.layers.Conv2D(64,(3,3),(1,1),padding = "same",input_shape=(28,28,1)),
keras.layers.MaxPooling2D(pool_size = (2,2),padding = "valid"),
keras.layers.Conv2D(32,(3,3),(1,1),padding = "same"),
keras.layers.MaxPooling2D(pool_size = (2,2),padding = "valid"),
keras.layers.Flatten(),
keras.layers.Dense(128,activation = "relu"),
keras.layers.Dense(10,activation = "softmax")])

model.compile(optimizer = "adam",
loss = "sparse_categorical_crossentropy",
metrics  = ['accuracy'])

model.fit(x_train,y_train,epochs=10)
test_loss,test_acc = model.evaluate(x_test,y_test)
print("\ntest accuracy:",test_acc)

Can anyone suggest how I can improve my model?

2 Answers:

Answer 0 (score: 2)

Your problem is here:

x_train = x_train/255
y_train = y_train/255 # makes no sense

You should rescale x_test, not y_train:

x_train = x_train/255
x_test = x_test/255

This was probably just a typo. Change these lines and your accuracy will reach 95% or more.
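A quick numpy sketch (my illustration, not part of the original answer) of why dividing the labels by 255 stalls training: the integer class labels collapse into tiny floats near zero, so sparse_categorical_crossentropy effectively sees class 0 for every sample.

```python
import numpy as np

# Sample MNIST labels (digits 0-9)
y = np.array([0, 1, 5, 9])

# What the buggy line `y_train = y_train/255` does to them:
scaled = y / 255
print(scaled)  # [0.         0.00392157 0.01960784 0.03529412]

# All targets are now near 0, so the model can only "fit" them by
# always predicting class 0 -- roughly 10% accuracy on ten balanced classes.
```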

Answer 1 (score: 1)

Your model has a scaling problem. With TF 2.0, try:

x_train /= 255
x_test /= 255

You should not rescale the labels, as you did here:

x_train = x_train/255
y_train = y_train/255

Then we can convert the labels to one-hot encoding:

from tensorflow.keras.utils import to_categorical

y_train = to_categorical(y_train, 10)

y_test = to_categorical(y_test, 10)
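to_categorical turns each integer label into a length-10 one-hot vector. The same mapping can be sketched in plain numpy (my illustration, equivalent in effect to the Keras helper):

```python
import numpy as np

# Integer class labels, e.g. the digits 3, 0 and 9
labels = np.array([3, 0, 9])

# Indexing the identity matrix gives one row per label,
# with a single 1 at the label's position
one_hot = np.eye(10)[labels]
print(one_hot.shape)  # (3, 10)
print(one_hot[0])     # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```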

which goes together with:

loss='categorical_crossentropy', 

The Sequential API lets us stack layers on top of each other. The only drawback is that such models cannot have multiple inputs or outputs. Still, we can create a Sequential object and use its add() function to append layers to the model. To make the model deeper and more accurate, you can stack several convolutional layers; for example, four Conv2D layers:

seq_model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', 
input_shape=x_train.shape[1:]))
seq_model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
seq_model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
seq_model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))

In the code, you can also use dropout:

seq_model.add(Dropout(rate=0.25))
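The idea behind Dropout(rate=0.25) can be sketched in plain numpy as "inverted dropout" (my illustration of the concept, not Keras internals): each unit is zeroed with probability 0.25 during training, and the survivors are scaled up so the expected activation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((4, 8))        # a batch of activations
rate = 0.25                # fraction of units to drop

# Keep each unit with probability 1 - rate, then rescale the survivors
mask = rng.random(x.shape) >= rate
dropped = x * mask / (1 - rate)

print(dropped.mean())      # close to 1.0 on average (unbiased in expectation)
```

At inference time Keras disables dropout entirely, which is why the rescaling during training matters.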

The full model:

%tensorflow_version 2.x
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()    
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout

seq_model = Sequential()
seq_model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', 
input_shape=x_train.shape[1:]))
seq_model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
seq_model.add(MaxPool2D(pool_size=(2, 2)))
seq_model.add(Dropout(rate=0.25))
seq_model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
seq_model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
seq_model.add(MaxPool2D(pool_size=(2, 2)))
seq_model.add(Dropout(rate=0.25))
seq_model.add(Flatten())
seq_model.add(Dense(256, activation='relu'))
seq_model.add(Dropout(rate=0.5))
seq_model.add(Dense(10, activation='softmax'))


seq_model.compile(
    loss='categorical_crossentropy', 
    optimizer='adam', 
    metrics=['accuracy']
)

epochsz = 3 # number of epochs
batch_sizez = 32 # batch size; can also be 64, 128, etc.
seq_model.fit(x_train,y_train, batch_size=batch_sizez, epochs=epochsz)

Result:

Train on 60000 samples
Epoch 1/3
60000/60000 [==============================] - 186s 3ms/sample - loss: 0.1379 - accuracy: 0.9588
Epoch 2/3
60000/60000 [==============================] - 187s 3ms/sample - loss: 0.0677 - accuracy: 0.9804
Epoch 3/3
60000/60000 [==============================] - 187s 3ms/sample - loss: 0.0540 - accuracy: 0.9840