Keras neural network accuracy is only 10%

Date: 2019-01-20 00:06:25

Tags: python tensorflow keras conv-neural-network mnist

I am learning how to train a Keras neural network on the MNIST dataset. However, when I run this code, I only get about 10% accuracy after 10 epochs of training. This means the network is only predicting a single class, since there are 10 classes. I am fairly sure this is a mistake in my data preparation rather than a problem with the network architecture, because I took the architecture from a tutorial (medium tutorial). Any idea why the model is not training?

My code:

from skimage import io
import numpy as np
from numpy import array
from PIL import Image
import csv
import random
from keras.preprocessing.image import ImageDataGenerator
import pandas as pd
from keras.utils import multi_gpu_model
import tensorflow as tf
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    directory="./trainingSet",
    class_mode="categorical",
    target_size=(50, 50),
    color_mode="rgb",
    batch_size=1,
    shuffle=True,
    seed=42
)
print(str(train_generator.class_indices) + " class indices")
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D, GlobalAveragePooling2D
from keras.optimizers import SGD
from keras import backend as K
from keras.layers import Input
from keras.models import Model
import keras
from keras.layers.normalization import BatchNormalization

K.clear_session()
K.set_image_dim_ordering('tf')
reg = keras.regularizers.l1_l2(1e-5, 0.0)
def conv_layer(channels, kernel_size, input):
    output = Conv2D(channels, kernel_size, padding='same',kernel_regularizer=reg)(input)
    output = BatchNormalization()(output)
    output = Activation('relu')(output)
    output = Dropout(0)(output)
    return output

model = Sequential()
model.add(Conv2D(28, kernel_size=(3,3), input_shape=(50, 50, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # Flattening the 2D arrays for fully connected layers
model.add(Dense(128, activation=tf.nn.relu))
model.add(Dropout(0.2))
model.add(Dense(10, activation=tf.nn.softmax))


from keras.optimizers import Adam
import tensorflow as tf

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
from keras.callbacks import ModelCheckpoint

epochs = 10

checkpoint = ModelCheckpoint('mnist.h5', save_best_only=True)

STEP_SIZE_TRAIN=train_generator.n/train_generator.batch_size
model.fit_generator(generator=train_generator,
                    steps_per_epoch=STEP_SIZE_TRAIN,
                    epochs=epochs,
                    callbacks=[checkpoint]
)

The output I get is as follows:

Using TensorFlow backend.
Found 42000 images belonging to 10 classes.
{'0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9} class indices
Epoch 1/10
42000/42000 [==============================] - 174s 4ms/step - loss: 14.4503 - acc: 0.1035
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/callbacks.py:434: RuntimeWarning: Can save best model only with val_loss available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
Epoch 2/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4487 - acc: 0.1036
Epoch 3/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 4/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 5/10
42000/42000 [==============================] - 169s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 6/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 7/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 8/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4483 - acc: 0.1036
Epoch 9/10
42000/42000 [==============================] - 168s 4ms/step - loss: 14.4480 - acc: 0.1036
Epoch 10/10
 5444/42000 [==>...........................] - ETA: 2:26 - loss: 14.3979 - acc: 0.1067

The trainingSet directory contains one folder per digit (0-9), with the images inside those folders. I am training on an AWS EC2 p3.2xlarge instance with the Amazon Deep Learning Linux AMI.

2 Answers:

Answer 0: (Score: 1)

Here is a list of odd things I noticed:

  • The images are not rescaled -> ImageDataGenerator(rescale=1/255)
  • A batch size of 1 (you probably want to increase that)
  • MNIST images are grayscale, so color_mode should be "grayscale"

(Also, there are several unused parts in your code that you may want to remove from the question.) A sketch combining these fixes follows below.
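
A minimal sketch of the generator with those three fixes applied, assuming the same ./trainingSet directory layout as in the question (the batch size of 32 is an arbitrary choice):

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1/255.)  # scale pixel values to [0, 1]
train_generator = train_datagen.flow_from_directory(
    directory="./trainingSet",
    class_mode="categorical",
    target_size=(50, 50),
    color_mode="grayscale",  # MNIST digits are single-channel
    batch_size=32,           # batches larger than 1 give less noisy gradient estimates
    shuffle=True,
    seed=42
)
# With grayscale input, the first layer's input_shape has to match:
# Conv2D(28, kernel_size=(3, 3), input_shape=(50, 50, 1))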

Answer 1: (Score: 0)

To add two more points to @abcdaire's answer,

  1. MNIST images are (28, 28) in size; you are feeding the network the wrong image size.
  2. Binarization is another method you can use, and it also makes the network learn faster. It can be done like this.


imges_dataset = imges_dataset / 255.0                 # scale pixel values to [0, 1]
imges_dataset = np.where(imges_dataset > 0.5, 1, 0)   # threshold at 0.5 to get binary images
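
A hedged sketch of applying that binarization on the fly while still using flow_from_directory (the 0.5 threshold, batch size, and directory path are assumptions), via Keras' preprocessing_function hook and the native 28x28 MNIST resolution:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

def binarize(img):
    # img is one image as a rank-3 float array; scale to [0, 1], then threshold at 0.5
    img = img / 255.0
    return np.where(img > 0.5, 1.0, 0.0)

train_datagen = ImageDataGenerator(preprocessing_function=binarize)
train_generator = train_datagen.flow_from_directory(
    directory="./trainingSet",
    class_mode="categorical",
    target_size=(28, 28),    # native MNIST image size
    color_mode="grayscale",
    batch_size=32,
    shuffle=True,
    seed=42
)
# The model's input_shape would then be (28, 28, 1).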