CuDNNGRU model overfits, and acc and val_acc never change

Asked: 2019-09-11 19:50:57

Tags: tensorflow keras neural-network

So I'm fairly new to this. While fitting a model on a time-series prediction problem, it started overfitting quickly, after about epoch 25. I added some dropout and reduced the network's complexity, but the model still overfits after a while. Also, acc and val_acc stay frozen from start to finish. Here is the network architecture:

# Keras 2.x imports (implied by the tags; not shown in my original snippet)
from keras.models import Sequential
from keras.layers import CuDNNGRU, Dropout, Dense

regressor = Sequential()

regressor.add(CuDNNGRU(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.3))

regressor.add(CuDNNGRU(units = 50, return_sequences = True))
regressor.add(Dropout(0.3))

regressor.add(CuDNNGRU(units = 50, return_sequences =False))
regressor.add(Dropout(0.3))

# i've intentionally commented out this layer to reduce complexity
# regressor.add(CuDNNGRU(units = 50))
# regressor.add(Dropout(0.2))

regressor.add(Dense(units = 1))

regressor.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['accuracy'])

history = regressor.fit(X_train, y_train, epochs = 60, batch_size = 64, validation_split=0.2)
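(A side note on the frozen metric, in case it is relevant: as far as I can tell, with a `mean_squared_error` loss Keras 2 maps `'accuracy'` to `binary_accuracy`, which counts exact matches between `y_true` and `round(y_pred)` — nearly impossible with continuous regression targets. A minimal NumPy sketch of that metric, re-implemented here rather than calling Keras itself:)

```python
import numpy as np

def binary_accuracy(y_true, y_pred):
    # Mirrors Keras 2's binary_accuracy: mean(equal(y_true, round(y_pred)))
    return np.mean(np.equal(y_true, np.round(y_pred)))

# Continuous targets: a rounded prediction (0.0 or 1.0) almost never
# equals the true value exactly, so "accuracy" sits near zero.
y_true = np.array([0.12, 0.57, 0.33])
y_pred = np.array([0.11, 0.58, 0.35])
print(binary_accuracy(y_true, y_pred))  # 0.0
```

This would at least explain why the reported acc values are tiny constants rather than meaningful numbers.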

Basically, this is how the epochs play out:

Train on 127936 samples, validate on 31984 samples
Epoch 1/60
127936/127936 [==============================] - 99s 772us/step - loss: 0.0015 - acc: 7.8164e-06 - val_loss: 0.0012 - val_acc: 3.1266e-05
Epoch 2/60
127936/127936 [==============================] - 98s 763us/step - loss: 3.1164e-04 - acc: 7.8164e-06 - val_loss: 8.9437e-04 - val_acc: 3.1266e-05
Epoch 3/60
127936/127936 [==============================] - 97s 762us/step - loss: 2.3211e-04 - acc: 7.8164e-06 - val_loss: 0.0010 - val_acc: 3.1266e-05
Epoch 4/60
127936/127936 [==============================] - 98s 765us/step - loss: 2.2029e-04 - acc: 7.8164e-06 - val_loss: 0.0016 - val_acc: 3.1266e-05
Epoch 5/60
127936/127936 [==============================] - 97s 760us/step - loss: 2.1758e-04 - acc: 7.8164e-06 - val_loss: 0.0028 - val_acc: 3.1266e-05
Epoch 6/60
127936/127936 [==============================] - 98s 765us/step - loss: 2.1232e-04 - acc: 7.8164e-06 - val_loss: 0.0016 - val_acc: 3.1266e-05
Epoch 7/60
127936/127936 [==============================] - 97s 761us/step - loss: 2.1088e-04 - acc: 7.8164e-06 - val_loss: 0.0012 - val_acc: 3.1266e-05
Epoch 8/60
127936/127936 [==============================] - 97s 760us/step - loss: 2.0391e-04 - acc: 7.8164e-06 - val_loss: 0.0012 - val_acc: 3.1266e-05
Epoch 9/60
127936/127936 [==============================] - 97s 761us/step - loss: 2.0780e-04 - acc: 7.8164e-06 - val_loss: 6.3458e-04 - val_acc: 3.1266e-05
Epoch 10/60
127936/127936 [==============================] - 97s 762us/step - loss: 2.0697e-04 - acc: 7.8164e-06 - val_loss: 5.3828e-04 - val_acc: 3.1266e-05
Epoch 11/60
127936/127936 [==============================] - 98s 763us/step - loss: 2.0546e-04 - acc: 7.8164e-06 - val_loss: 4.7978e-04 - val_acc: 3.1266e-05
Epoch 12/60
127936/127936 [==============================] - 98s 763us/step - loss: 2.0410e-04 - acc: 7.8164e-06 - val_loss: 3.6436e-04 - val_acc: 3.1266e-05
Epoch 13/60
127936/127936 [==============================] - 98s 763us/step - loss: 2.0250e-04 - acc: 7.8164e-06 - val_loss: 3.5326e-04 - val_acc: 3.1266e-05
Epoch 14/60
127936/127936 [==============================] - 97s 758us/step - loss: 2.0288e-04 - acc: 7.8164e-06 - val_loss: 4.8453e-04 - val_acc: 3.1266e-05
Epoch 15/60
127936/127936 [==============================] - 98s 763us/step - loss: 2.0580e-04 - acc: 7.8164e-06 - val_loss: 0.0014 - val_acc: 3.1266e-05
Epoch 16/60
127936/127936 [==============================] - 97s 760us/step - loss: 2.0156e-04 - acc: 7.8164e-06 - val_loss: 0.0011 - val_acc: 3.1266e-05
Epoch 17/60
127936/127936 [==============================] - 97s 762us/step - loss: 2.0055e-04 - acc: 7.8164e-06 - val_loss: 0.0012 - val_acc: 3.1266e-05
Epoch 18/60
127936/127936 [==============================] - 97s 759us/step - loss: 2.0162e-04 - acc: 7.8164e-06 - val_loss: 0.0013 - val_acc: 3.1266e-05
Epoch 19/60
127936/127936 [==============================] - 97s 761us/step - loss: 1.9856e-04 - acc: 7.8164e-06 - val_loss: 7.1617e-04 - val_acc: 3.1266e-05
Epoch 20/60
127936/127936 [==============================] - 97s 758us/step - loss: 2.0146e-04 - acc: 7.8164e-06 - val_loss: 8.7160e-04 - val_acc: 3.1266e-05
Epoch 21/60
127936/127936 [==============================] - 97s 761us/step - loss: 2.0139e-04 - acc: 7.8164e-06 - val_loss: 0.0017 - val_acc: 3.1266e-05
Epoch 22/60
127936/127936 [==============================] - 97s 760us/step - loss: 2.0001e-04 - acc: 7.8164e-06 - val_loss: 0.0013 - val_acc: 3.1266e-05
Epoch 23/60
127936/127936 [==============================] - 97s 761us/step - loss: 2.0003e-04 - acc: 7.8164e-06 - val_loss: 7.9431e-04 - val_acc: 3.1266e-05
Epoch 24/60
127936/127936 [==============================] - 98s 763us/step - loss: 1.9823e-04 - acc: 7.8164e-06 - val_loss: 0.0011 - val_acc: 3.1266e-05
Epoch 25/60
127936/127936 [==============================] - 98s 762us/step - loss: 1.9902e-04 - acc: 7.8164e-06 - val_loss: 0.0012 - val_acc: 3.1266e-05
Epoch 26/60
127936/127936 [==============================] - 97s 762us/step - loss: 1.9857e-04 - acc: 7.8164e-06 - val_loss: 0.0012 - val_acc: 3.1266e-05
Epoch 27/60
127936/127936 [==============================] - 97s 761us/step - loss: 1.9652e-04 - acc: 7.8164e-06 - val_loss: 0.0020 - val_acc: 3.1266e-05
Epoch 28/60
127936/127936 [==============================] - 97s 760us/step - loss: 1.9940e-04 - acc: 7.8164e-06 - val_loss: 0.0017 - val_acc: 3.1266e-05
Epoch 29/60
127936/127936 [==============================] - 97s 758us/step - loss: 1.9802e-04 - acc: 7.8164e-06 - val_loss: 0.0032 - val_acc: 3.1266e-05
Epoch 30/60
127936/127936 [==============================] - 97s 761us/step - loss: 1.9759e-04 - acc: 7.8164e-06 - val_loss: 0.0018 - val_acc: 3.1266e-05
Epoch 31/60
127936/127936 [==============================] - 97s 757us/step - loss: 1.9716e-04 - acc: 7.8164e-06 - val_loss: 0.0033 - val_acc: 3.1266e-05
Epoch 32/60
127936/127936 [==============================] - 97s 760us/step - loss: 1.9611e-04 - acc: 7.8164e-06 - val_loss: 0.0038 - val_acc: 3.1266e-05
Epoch 33/60
127936/127936 [==============================] - 97s 762us/step - loss: 1.9661e-04 - acc: 7.8164e-06 - val_loss: 0.0066 - val_acc: 3.1266e-05
Epoch 34/60
127936/127936 [==============================] - 98s 762us/step - loss: 1.9186e-04 - acc: 7.8164e-06 - val_loss: 0.0068 - val_acc: 3.1266e-05
Epoch 35/60
127936/127936 [==============================] - 97s 762us/step - loss: 1.9449e-04 - acc: 7.8164e-06 - val_loss: 0.0077 - val_acc: 3.1266e-05
As you can see, val_loss starts to climb after about epoch 23. Why do acc and val_acc never change at all in this model?

0 Answers

There are no answers yet.