Loss stops decreasing during MLP network training

Time: 2018-05-03 15:46:52

Tags: tensorflow deep-learning keras

For a prediction task I built an MLP based on 1D convolutions; the model architecture is shown below. However, training tends to stall after only two epochs: the loss stops decreasing. The training statistics are shown below. What could be the cause, and how can I fix it? Here is the code:

from keras.layers import Input, Conv1D, MaxPooling1D, Dense, Flatten
from keras.models import Model

loss_function = 'mean_squared_error'
optimizer = 'Adagrad'
batch_size = 256
nr_of_epochs = 120
inputs = Input(shape=(64,1))
outX = Conv1D(60, 32, strides=1, activation='relu',padding='causal')(inputs)
outX = Conv1D(80, 10, activation='relu',padding='causal')(outX)
outX = Conv1D(100, 5, activation='relu',padding='causal')(outX)
outX = MaxPooling1D(2)(outX)
outX = Dense(300, activation='relu')(outX)
outX = Flatten()(outX)
predictions = Dense(1,activation='linear')(outX)
model = Model(inputs=[inputs],outputs=predictions)
print(model.summary())
model.compile(loss=loss_function, optimizer=optimizer,metrics=['mse','mae'])
history=model.fit(X_train, Y_train, batch_size=batch_size, validation_data=(X_val,Y_val), epochs=nr_of_epochs,verbose=2)

[screenshots: training statistics]

1 Answer:

Answer 0 (score: 0)

I think you are using the default learning rate of Adagrad. From my past experience with Keras, you probably need to lower the learning rate, perhaps to 0.01 or 0.001.

You can find the default value in this function signature: Adagrad

Change the optimizer line to something like this:

optimizer = Adagrad(lr=0.0001)

Keeping the rest of the code the same, I think you should see some improvement.
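To see why the learning rate matters here, below is a minimal NumPy sketch of the textbook Adagrad update rule (an illustration only, not Keras's internal implementation; the helper name `adagrad_step` is made up for this example):

```python
import numpy as np

def adagrad_step(theta, grad, cache, lr=0.01, eps=1e-7):
    """One Adagrad update: each parameter's effective step size is
    lr / (sqrt(accumulated squared gradients) + eps)."""
    cache = cache + grad ** 2
    theta = theta - lr * grad / (np.sqrt(cache) + eps)
    return theta, cache

theta = np.array([1.0])
grad = np.array([2.0])
cache = np.zeros(1)

# The same gradient produces a step roughly proportional to lr, so a
# learning rate that is too large can overshoot and stall the loss,
# while a smaller lr takes proportionally smaller, safer steps.
theta_big, _ = adagrad_step(theta, grad, cache, lr=0.01)
theta_small, _ = adagrad_step(theta, grad, cache, lr=0.0001)
print(1.0 - theta_big[0], 1.0 - theta_small[0])
```

On the first step the accumulated cache equals the squared gradient, so the step magnitude is approximately lr regardless of the gradient's scale; shrinking lr from 0.01 to 0.0001 shrinks every step by the same factor.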