Why does my tf.keras model perform so poorly on the MNIST dataset?

Asked: 2018-08-06 21:00:53

Tags: tensorflow keras

I built a simple model for MNIST using tf.keras layers and models, and I feed the data to the model through the tf.data API. However, this setup performs very poorly (around 77% accuracy on MNIST). I have read similar issues on the TensorFlow GitHub page, but they were all closed without any solution. Here is self-contained code that reproduces the problem:

Note: you need TensorFlow 1.9 to run this.

import tensorflow as tf 

_EPOCHS      = 20
_NUM_CLASSES = 10
_BATCH_SIZE  = 128

def training_pipeline():
    # #############
    # Load Dataset
    # #############
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    training_set = tfdata_generator(x_train, y_train, is_training=True, batch_size=_BATCH_SIZE)
    testing_set  = tfdata_generator(x_test, y_test, is_training=False, batch_size=_BATCH_SIZE)

    # #############
    # Train Model
    # #############
    model = baseline_model()  # your keras model here
    # model.compile('adam', 'categorical_crossentropy', metrics=['acc'])
    model.fit(
        training_set,
        steps_per_epoch=len(x_train) // _BATCH_SIZE,
        epochs=_EPOCHS,
        validation_data=testing_set,
        validation_steps=len(x_test) // _BATCH_SIZE,
        verbose=1)


def tfdata_generator(images, labels, is_training, batch_size=128):
    '''Construct a data generator using tf.Dataset'''

    def preprocess_fn(image, label):
        '''A transformation function to preprocess raw data
        into trainable input. '''
        x = tf.reshape(tf.cast(image, tf.float32), (28, 28, 1))
        y = tf.one_hot(tf.cast(label, tf.uint8), _NUM_CLASSES)
        return x, y

    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    if is_training:
        dataset = dataset.shuffle(1000)  # depends on sample size

    # Transform and batch data at the same time
    dataset = dataset.apply(tf.contrib.data.map_and_batch(
        preprocess_fn, batch_size,
        num_parallel_batches=4,  # cpu cores
        drop_remainder=True if is_training else False))
    dataset = dataset.repeat()
    dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE)

    return dataset

def baseline_model():
    # create model
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
    model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(tf.keras.layers.Dropout(0.2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dense(10, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

if __name__ == '__main__':
    training_pipeline()

I ran it 4 times, and the accuracy varied wildly from run to run, anywhere between 40% and 77%. So why does it perform so badly? Thanks in advance.
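To make it easier to see what the model actually receives from the pipeline, here is a small diagnostic I could add alongside the code above — a minimal sketch, assuming it lives in the same file so that tfdata_generator and _BATCH_SIZE are in scope (the helper name inspect_one_batch is just for illustration). It pulls one batch through a one-shot iterator in a TF 1.x session and prints the shapes, dtypes, and raw pixel value range reaching the model:

import tensorflow as tf

def inspect_one_batch():
    # Build the same training pipeline as above and pull a single batch,
    # so we can check the tensor shapes and the range of the pixel values
    # that actually reach the model.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    dataset = tfdata_generator(x_train, y_train, is_training=True, batch_size=_BATCH_SIZE)
    images, labels = dataset.make_one_shot_iterator().get_next()

    with tf.Session() as sess:
        x, y = sess.run([images, labels])
        print('image batch:', x.shape, 'dtype:', x.dtype, 'min:', x.min(), 'max:', x.max())
        print('label batch:', y.shape, 'dtype:', y.dtype)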

0 Answers
