Keras - training loss vs. validation loss

Date: 2018-10-29 14:28:22

Tags: tensorflow keras

Purely for the sake of argument, I am using the same data for both training and validation during training, like so:

model.fit_generator(
    generator=train_generator,
    epochs=EPOCHS,
    steps_per_epoch=train_generator.n // BATCH_SIZE,
    validation_data=train_generator,  # intentionally the same generator as for training
    validation_steps=train_generator.n // BATCH_SIZE
)

So I would expect the loss and accuracy at the end of each epoch to be more or less identical for training and validation. Yet it looks like this:

Epoch 1/150
26/26 [==============================] - 55s 2s/step - loss: 1.5520 - acc: 0.3171 - val_loss: 1.6646 - val_acc: 0.2796
Epoch 2/150
26/26 [==============================] - 46s 2s/step - loss: 1.2924 - acc: 0.4996 - val_loss: 1.5895 - val_acc: 0.3508
Epoch 3/150
26/26 [==============================] - 46s 2s/step - loss: 1.1624 - acc: 0.5873 - val_loss: 1.6197 - val_acc: 0.3262
Epoch 4/150
26/26 [==============================] - 46s 2s/step - loss: 1.0601 - acc: 0.6265 - val_loss: 1.9420 - val_acc: 0.3150
Epoch 5/150
26/26 [==============================] - 46s 2s/step - loss: 0.9790 - acc: 0.6640 - val_loss: 1.9667 - val_acc: 0.2823
Epoch 6/150
26/26 [==============================] - 46s 2s/step - loss: 0.9191 - acc: 0.6951 - val_loss: 1.8594 - val_acc: 0.3342
Epoch 7/150
26/26 [==============================] - 46s 2s/step - loss: 0.8811 - acc: 0.7087 - val_loss: 2.3223 - val_acc: 0.2869
Epoch 8/150
26/26 [==============================] - 46s 2s/step - loss: 0.8148 - acc: 0.7379 - val_loss: 1.9683 - val_acc: 0.3358
Epoch 9/150
26/26 [==============================] - 46s 2s/step - loss: 0.8068 - acc: 0.7307 - val_loss: 2.1053 - val_acc: 0.3312

Why does the accuracy in particular differ so much, even though both values come from the same data source? Is there something about the way this is computed that I'm missing?


The generator is created like this:

train_images = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)

train_generator = train_images.flow_from_directory(
    directory="data/superheros/images/train",
    target_size=(299, 299),
    batch_size=BATCH_SIZE,
    shuffle=True
)

Yes, it shuffles the images, but since validation iterates over all of the images anyway, shouldn't the accuracy at least be close?
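
A quick way to see what the shuffling does (a hypothetical check, not part of the original post) is to draw two consecutive batches from the shared generator; because the iterator is stateful and shuffled, the images will almost never match:

import numpy as np

# two consecutive draws from the same stateful, shuffled iterator
batch_x1, batch_y1 = next(train_generator)
batch_x2, batch_y2 = next(train_generator)

# almost certainly prints False when shuffle=True
print(np.array_equal(batch_x1, batch_x2))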


And the model looks like this:

# InceptionV3 base without its classification head
inceptionV3 = keras.applications.inception_v3.InceptionV3(include_top=False)

features = inceptionV3.output

net = keras.layers.GlobalAveragePooling2D()(features)
predictions = keras.layers.Dense(units=2, activation="softmax")(net)

# freeze the base; only the new Dense head is trained
for layer in inceptionV3.layers:
    layer.trainable = False

model = keras.Model(inputs=inceptionV3.input, outputs=predictions)
optimizer = keras.optimizers.RMSprop()
model.compile(
    optimizer=optimizer,
    loss="categorical_crossentropy",
    metrics=['accuracy']
)

So there is no dropout or anything else, just InceptionV3 with a softmax layer on top. I would expect the accuracy to differ somewhat, but not by this much.

1 Answer:

Answer 0 (score: 0)

Are you sure that train_generator returns the same data when Keras pulls from it for training and then again for validation, given that it is a generator?

The name says generator, so I would expect that it doesn't :)
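
If you want to rule the generator out, a minimal sketch (assuming the setup from the question; val_images and val_generator are new names, not from the original code) is to give validation its own, non-shuffled iterator over the same directory, so the validation pass is deterministic and independent of the training iterator's internal state:

val_images = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)

val_generator = val_images.flow_from_directory(
    directory="data/superheros/images/train",  # same data, for the sake of the experiment
    target_size=(299, 299),
    batch_size=BATCH_SIZE,
    shuffle=False  # deterministic order for validation
)

model.fit_generator(
    generator=train_generator,
    epochs=EPOCHS,
    steps_per_epoch=train_generator.n // BATCH_SIZE,
    validation_data=val_generator,
    validation_steps=val_generator.n // BATCH_SIZE
)

With shuffle=False the validation pass walks the directory in the same fixed order every epoch, so any remaining gap between training and validation metrics can no longer be blamed on which samples the shared iterator happened to serve.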
