Validation accuracy is low and does not change

Date: 2019-07-02 17:07:30

Tags: machine-learning computer-vision

I am training a Keras model with an inception_v3 backbone to classify images into 4 classes. However, after a lot of debugging, my validation accuracy still does not change, while the training accuracy reaches about 95% in the very first epoch. I have a large dataset consisting of 407 TFRecord files. Can anyone help me?

I am trying to classify the images into 4 classes, Crystal, Clear, Precipitate, and Other, labeled 0, 1, 2, and 3 respectively, but my code does not seem to work well. The dataset is available at https://marco.ccr.buffalo.edu/download.

I cannot figure out why the training accuracy rises so quickly while the validation accuracy does not change at all, even after more than 10 epochs. There must be something wrong with my code. Can anyone help me?

import random

import tensorflow as tf

batch_size = 64
num_classes = 4
epochs = 2000
train_steps_per_epoch = 6500
vali_steps_per_epoch = 735

# Disable deterministic ordering so interleaved shard reads can run in parallel
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False
AUTO = tf.data.experimental.AUTOTUNE

train_files = tf.data.Dataset.list_files(r"\train*", seed=2)
train_files = train_files.with_options(ignore_order)
train_files = train_files.interleave(tf.data.TFRecordDataset, 
                             cycle_length=407,
                             num_parallel_calls=tf.data.experimental.AUTOTUNE)

validation_files = tf.data.Dataset.list_files(r"\test*", seed=3)
validation_files = validation_files.with_options(ignore_order)
validation_files = validation_files.interleave(tf.data.TFRecordDataset,
                                                cycle_length=46,
                                                num_parallel_calls=tf.data.experimental.AUTOTUNE)

def decode_example(example_proto):
    image_feature_description = {
    'image/height': tf.io.FixedLenFeature([], tf.int64),
    'image/width': tf.io.FixedLenFeature([], tf.int64),
    'image/colorspace': tf.io.FixedLenFeature([], tf.string),
    'image/channels': tf.io.FixedLenFeature([], tf.int64),
    'image/class/label': tf.io.FixedLenFeature([], tf.int64),
    'image/class/raw': tf.io.FixedLenFeature([], tf.int64),
    'image/class/source': tf.io.FixedLenFeature([], tf.int64),
    'image/class/text': tf.io.FixedLenFeature([], tf.string),
    'image/format': tf.io.FixedLenFeature([], tf.string),
    'image/filename': tf.io.FixedLenFeature([], tf.string),
    'image/id': tf.io.FixedLenFeature([], tf.int64),
    'image/encoded': tf.io.FixedLenFeature([], tf.string),
}
    parsed_features = tf.io.parse_single_example(example_proto, image_feature_description)
    height = tf.cast(parsed_features['image/height'], tf.int32)
    width = tf.cast(parsed_features['image/width'], tf.int32)
    channels = tf.cast(parsed_features['image/channels'], tf.int32)
    label = tf.cast(parsed_features['image/class/label'], tf.int32)
    image_buffer = parsed_features['image/encoded']
    image = tf.io.decode_jpeg(image_buffer, channels=3)
    image = tf.image.central_crop(image, 0.8)
    image = tf.image.resize(image, [299, 299])
    image /= 255.0
    return image, label
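
# Quick sanity check of the decode_example output (assuming TF 2.x eager execution):
# images should be (299, 299, 3) floats in [0, 1] and labels should lie in 0..3.
for image, label in train_files.map(decode_example).take(1):
    print(image.shape, image.dtype.name,
          float(tf.reduce_min(image)), float(tf.reduce_max(image)), int(label))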

def image_augmentation(image, label):
    if random.random() < 0.5:
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_brightness(image=image, max_delta=32/255)
        image = tf.image.random_contrast(image, 0.5, 1.5)
        image = tf.image.random_hue(image, 0.2)
    return image, label

def processed_dataset(dataset):
    dataset = dataset.shuffle(buffer_size=10000, seed=1)
    dataset = dataset.repeat()
    dataset = dataset.batch(batch_size)
    return dataset

def data_generator(dataset):
    iter = tf.compat.v1.data.make_one_shot_iterator(dataset)
    image, label = iter.get_next()
    while True:
        yield image, tf.keras.utils.to_categorical(label, num_classes=num_classes)

train_dataset = train_files.map(decode_example, num_parallel_calls=AUTO)
train_dataset = train_dataset.map(image_augmentation, num_parallel_calls=AUTO)
train_dataset = processed_dataset(train_dataset)
validation_dataset = validation_files.map(decode_example, num_parallel_calls=AUTO)
validation_dataset = processed_dataset(validation_dataset)
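
The model construction and the training call are not included in the snippet above; the log below was produced by training InceptionV3 on these generators. A minimal sketch of that kind of setup is shown here for context only (the head layers, optimizer, ImageNet weights, and the fit_generator call are assumptions, not the original code):

base_model = tf.keras.applications.InceptionV3(weights='imagenet',
                                               include_top=False,
                                               input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
model = tf.keras.Model(base_model.input, outputs)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(data_generator(train_dataset),
                    steps_per_epoch=train_steps_per_epoch,
                    epochs=epochs,
                    validation_data=data_generator(validation_dataset),
                    validation_steps=vali_steps_per_epoch)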

Results:

Epoch 1/2000
6500/6500 [==============================] - 3942s 606ms/step - loss: 0.0065 - accuracy: 0.9982 - val_loss: 3.2384 - val_accuracy: 0.4062
Epoch 2/2000
6500/6500 [==============================] - 3151s 485ms/step - loss: 8.6521e-04 - accuracy: 1.0000 - val_loss: 3.2648 - val_accuracy: 0.4062
Epoch 3/2000
6500/6500 [==============================] - 3152s 485ms/step - loss: 8.1589e-04 - accuracy: 1.0000 - val_loss: 3.2768 - val_accuracy: 0.4062
Epoch 4/2000
6500/6500 [==============================] - 3152s 485ms/step - loss: 7.9254e-04 - accuracy: 1.0000 - val_loss: 3.2835 - val_ac
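
For reference: since val_accuracy stays pinned at exactly 0.4062, one quick check is whether that value simply matches the share of the most frequent class coming out of the validation pipeline. A rough sketch (assuming TF 2.x eager execution and the validation_dataset defined above):

from collections import Counter

label_counts = Counter()
for images, labels in validation_dataset.take(20):  # 20 batches of 64 examples
    label_counts.update(labels.numpy().tolist())
print(label_counts)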
