Keras LSTM val_loss always returns NaN during training

Asked: 2019-03-06 00:18:49

Tags: python keras lstm

I am training my model on stock data with the following code:

....



generator = batch_generator(
    sequence_length=SEQ, testsize=testsize, x_train_g=x_train, y_train_g=y_train)
test_generator = batch_generator(
    sequence_length=SEQ, testsize=testsize, x_train_g=x_test, y_train_g=y_test_reshaped)
x_batch, y_batch = next(generator)

...
model.add(Dense(num_y_signals, activation='sigmoid'))

model.compile(loss='mse', optimizer='rmsprop', metrics=["mae"])




history = model.fit_generator(generator=generator, verbose=1,
                              validation_data=test_generator, validation_steps=10,
                              epochs=80, steps_per_epoch=20)

import numpy as np

def batch_generator(sequence_length, testsize, x_train_g, y_train_g, batch_size=256):

    warmup_steps = 30
    num_x_signals = len(x_train_g[0])
    num_y_signals = 1
    while True:
        # Allocate one batch of input sequences and matching target sequences.
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape, dtype=np.float16)

        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape=y_shape, dtype=np.float16)

        for i in range(batch_size):
            # Pick a random start index and copy a contiguous window of
            # sequence_length time steps from the source arrays.
            idx = np.random.randint(testsize - sequence_length)
            x_batch[i] = x_train_g[idx:idx+sequence_length]
            y_batch[i] = y_train_g[idx:idx+sequence_length]

        yield (x_batch, y_batch)

However, the validation loss is always NaN during training. I have tried different activation functions and optimizers, but that did not help.

I believe the mistake is something simple, but I cannot figure it out.
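
One way to narrow this down is to check the raw arrays for non-finite values before training. A minimal sketch, assuming x_train, y_train, x_test and y_test_reshaped are the NumPy arrays passed to the generators above:

import numpy as np

# Report whether any array fed to the generators contains NaN or Inf values.
for name, arr in [("x_train", x_train), ("y_train", y_train),
                  ("x_test", x_test), ("y_test_reshaped", y_test_reshaped)]:
    arr = np.asarray(arr, dtype=np.float64)
    print(name, "contains NaN:", np.isnan(arr).any(),
          "contains Inf:", np.isinf(arr).any())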

1 Answer:

Answer 0 (score: 0):

OK, I found the mistake: my validation set contained NaN values.
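
For reference, the offending values can be removed before building the generators. A minimal sketch, assuming the raw series is still held in a pandas DataFrame named df (a hypothetical name, not from the original code):

import pandas as pd

# Drop every row that contains a NaN value.
df = df.dropna()

# Or, to keep the time series contiguous, forward-fill gaps first:
# df = df.ffill().dropna()

Either approach should be applied before the train/test split, so the arrays handed to fit_generator are free of NaN values.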