Early stopping for an LSTM in Python

Date: 2018-10-15 17:34:33

Tags: python-3.x tensorflow lstm

How do I apply early stopping to an LSTM?

I am using TensorFlow in Python, not Keras.

Example Python code would be much appreciated.

Thanks

2 answers:

Answer 0: (score: 1)

You can use an EarlyStopping callback (together with checkpoints):

from keras.callbacks import EarlyStopping

earlyStop = EarlyStopping(monitor="val_loss", verbose=2, mode='min', patience=3)
history = model.fit(xTrain, yTrain, epochs=100, batch_size=10,
                    validation_data=(xTest, yTest), verbose=2,
                    callbacks=[earlyStop])

Training stops if "val_loss" (monitor="val_loss") has not decreased (mode='min') for 3 consecutive epochs (patience=3).
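If you also want the best weights written to disk (the "checkpoints" mentioned above), a ModelCheckpoint callback with save_best_only=True can be combined with EarlyStopping. A minimal sketch, assuming the same model and data names as above; the file name best_model.h5 is just an illustrative choice:

from keras.callbacks import EarlyStopping, ModelCheckpoint

earlyStop = EarlyStopping(monitor="val_loss", mode='min', patience=3, verbose=2)
# keep only the weights with the lowest validation loss seen so far
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss", mode='min',
                             save_best_only=True, verbose=1)
history = model.fit(xTrain, yTrain, epochs=100, batch_size=10,
                    validation_data=(xTest, yTest), verbose=2,
                    callbacks=[earlyStop, checkpoint])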

# Edit: didn't realize you were not using Keras

Answer 1: (score: 0)

A little searching turns this up: https://github.com/mmuratarat/handson-ml/blob/master/11_deep_learning.ipynb

import numpy as np

max_checks_without_progress = 20   # patience: give up after this many epochs with no improvement
checks_without_progress = 0
best_loss = np.infty

....

    if loss_val < best_loss:
        # validation loss improved: checkpoint the model and reset the counter
        save_path = saver.save(sess, './my_mnist_model.ckpt')
        best_loss = loss_val
        checks_without_progress = 0
    else:
        checks_without_progress += 1
        if checks_without_progress > max_checks_without_progress:
            print("Early stopping!")
            break

    print("Epoch: {:d} - ".format(epoch), \
          "Training Loss: {:.5f}, ".format(loss_train), \
          "Training Accuracy: {:.2f}%, ".format(accuracy_train*100), \
          "Validation Loss: {:.4f}, ".format(loss_val), \
          "Best Loss: {:.4f}, ".format(best_loss), \
          "Validation Accuracy: {:.2f}%".format(accuracy_val*100))