tensorflow.keras model accuracy, loss, and validation metrics remain unchanged over 30 training epochs

Posted: 2020-05-04 01:26:57

Tags: python tensorflow machine-learning keras deep-learning

I have written a CNN that takes MFCC spectrograms and is meant to classify the images into five different classes. I trained the model for 30 epochs, and after the first epoch the metrics stopped changing. Class imbalance may be part of the problem; if so, how would I bias the model toward the under-represented classes, if that is possible? Below are the data generator code, the model definition, and the output. The original model had two additional layers, but I started stripping it down while troubleshooting the problem.

Data generator definition:

import numpy as np
import tensorflow as tf

path = 'path_to_dataset'
CLASS_NAMES = ['belly_pain', 'burping', 'discomfort', 'hungry', 'tired']
CLASS_NAMES = np.array(CLASS_NAMES)
BATCH_SIZE = 32
IMG_HEIGHT = 150
IMG_WIDTH = 150
# 457 is the number of images total
STEPS_PER_EPOCH = np.ceil(457/BATCH_SIZE)

img_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, validation_split=0.2, horizontal_flip=True, rotation_range=45, width_shift_range=.15, height_shift_range=.15)

train_data_gen = img_generator.flow_from_directory( directory=path, batch_size=BATCH_SIZE, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH), classes = list(CLASS_NAMES), subset='training', class_mode='categorical')

validation_data_gen = img_generator.flow_from_directory( directory=path, batch_size=BATCH_SIZE, shuffle=True, target_size=(IMG_HEIGHT, IMG_WIDTH), classes = list(CLASS_NAMES), subset='validation', class_mode='categorical')
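
Note (not part of the original post): class imbalance in Keras is commonly handled by passing per-class weights to model.fit(); a minimal sketch, assuming the train_data_gen defined above and a model whose output layer matches the number of classes:

from collections import Counter

# Weight each class inversely to its frequency so under-represented classes
# contribute more to the loss.
counts = Counter(train_data_gen.classes)   # integer label of every training sample
total = sum(counts.values())
class_weight = {int(c): total / (len(counts) * n) for c, n in counts.items()}

# Hypothetical training call showing where the weights would be passed:
# model.fit(train_data_gen, validation_data=validation_data_gen,
#           steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS,
#           class_weight=class_weight)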

Model definition:

EPOCHS = 30

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(128, 3, activation='relu', 
           input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='sigmoid'),
    Dense(1)
])

opt = tf.keras.optimizers.Adamax(lr=0.001)
model.compile(optimizer=opt,
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

First 5 epochs:

Epoch 1/30
368/368 [==============================] - 371s 1s/step - loss: 0.6713 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 2/30
368/368 [==============================] - 235s 640ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 3/30
368/368 [==============================] - 233s 633ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 4/30
368/368 [==============================] - 236s 641ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 5/30
368/368 [==============================] - 234s 636ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000

Last 5 epochs:

Epoch 25/30
368/368 [==============================] - 231s 628ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 26/30
368/368 [==============================] - 227s 617ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 27/30
368/368 [==============================] - 228s 620ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 28/30
368/368 [==============================] - 234s 636ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 29/30
368/368 [==============================] - 235s 638ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 30/30
368/368 [==============================] - 234s 636ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000

1 Answer:

Answer 0 (score: 2)

You are trying to solve a five-class classification task (your CLASS_NAMES list has five entries), but the final layer contains only a single neuron.

It should be a Dense layer with five neurons and a softmax activation:

Dense(5, activation="softmax")

You also need to switch the loss function to a multi-class classification loss, for example categorical_crossentropy.
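
Putting the two changes together, a minimal sketch (not the original poster's code) of the corrected model definition and compile step, assuming the five classes and class_mode='categorical' from the question; note that with a softmax output the loss no longer uses from_logits=True:

model = Sequential([
    Conv2D(128, 3, activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='sigmoid'),
    Dense(5, activation='softmax')   # one output unit per class
])

model.compile(optimizer=tf.keras.optimizers.Adamax(lr=0.001),
              loss='categorical_crossentropy',  # softmax yields probabilities, so from_logits stays False
              metrics=['accuracy'])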
