Keras: update weights only after every n batches

Posted: 2019-01-10 18:25:01

Tags: python keras

from keras.layers import Input, Dense, LSTM, Lambda, Add, concatenate
from keras.models import Model
from keras import optimizers
import keras.backend as K

# mask_loss, mask_acc, beam_acc, beam_size and the training/dev arrays
# are defined elsewhere in my code.

x = Input(shape=(72, 300))
aux_input = Input(shape=(72, 3))
probs = Input(shape=(1,))
dim_reduct = Dense(5, activation='tanh')(x)
cat = concatenate([dim_reduct, aux_input])
encoded = LSTM(64)(cat)
#cat2 = concatenate([encoded, probs])
#output = Dense(1)(encoded)
lam = Lambda(lambda x: K.sum(x, axis=1))(encoded)
#lam2 = Lambda(lambda x: K.sum(x, axis=1))(lam)
probs_aug = Lambda(lambda x: x * .01)(probs)
output = Add()([lam, probs_aug])

sgd = optimizers.Adam(lr=0.01)#, nesterov=True, momentum=.9, decay=1e-5)
lstm_model = Model(inputs=[x, aux_input, probs], outputs=output)
lstm_model.compile(optimizer=sgd, loss=mask_loss, metrics=[mask_acc, beam_acc])

lstm_model.fit([X_train, cat_feats_train, train_probs], y_train, batch_size=beam_size,
               epochs=10, verbose=1, shuffle=False,
               validation_data=([X_dev, cat_feats_dev, dev_probs], y_dev))
               #, callbacks=[PlotLossesCallback()])

I am training a model with batch_size equal to 10, because every group of 10 rows in my data represents one complete input and should be computed and evaluated together. This means that, in effect, the gradients are updated once per input. Is there a way to delay the gradient update so that it only happens every N batches, to avoid this?

0 Answers:

There are no answers yet.
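Although the question has no posted answer, the effect it asks for is usually achieved with gradient accumulation: compute and sum gradients for N batches, and call the optimizer only once per N batches with the (averaged) accumulated gradients. The sketch below is not the asker's model; it uses the `tf.keras` `GradientTape` training loop rather than `fit`, and the tiny `Dense` model, the random data, and the name `accum_steps` are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
np.random.seed(0)

# Toy stand-in model (illustrative, not the asker's LSTM model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

accum_steps = 5  # apply one weight update per 5 batches (assumed N)
accum_grads = [tf.zeros_like(v) for v in model.trainable_variables]

X = np.random.rand(50, 4).astype("float32")
y = np.random.rand(50, 1).astype("float32")

batch_size = 5
for step in range(10):  # 10 batches -> only 2 optimizer updates
    xb = X[step * batch_size:(step + 1) * batch_size]
    yb = y[step * batch_size:(step + 1) * batch_size]
    with tf.GradientTape() as tape:
        loss = loss_fn(yb, model(xb, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    # Accumulate instead of applying immediately.
    accum_grads = [a + g for a, g in zip(accum_grads, grads)]
    if (step + 1) % accum_steps == 0:
        # Average the accumulated gradients, apply once, then reset.
        optimizer.apply_gradients(
            zip([g / accum_steps for g in accum_grads],
                model.trainable_variables))
        accum_grads = [tf.zeros_like(v)
                       for v in model.trainable_variables]
```

Averaging by `accum_steps` keeps the effective step size comparable to training with a single large batch of `N * batch_size` rows, which is what grouping 10-row inputs together would amount to here.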