Very low accuracy with LSTM

Date: 2019-10-14 19:07:48

Tags: tensorflow keras lstm

I have no idea why my configuration gets such low accuracy (it is always 0.1508). Data shape: (1476, 1000, 1)

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, BatchNormalization, Dense
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
scaled_X = scaler.fit_transform(train_Data)

....

myModel = Sequential()

# Stacked LSTM blocks: every LSTM except the last returns the full sequence
myModel.add(LSTM(128, input_shape=myData.shape[1:], activation='relu', return_sequences=True))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())

myModel.add(LSTM(128, activation='relu', return_sequences=True))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())

myModel.add(LSTM(64, activation='relu', return_sequences=True))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())

myModel.add(LSTM(32, activation='relu'))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())

myModel.add(Dense(16, activation='relu'))
myModel.add(Dropout(0.2))

# 8-way classification head
myModel.add(Dense(8, activation='softmax'))
#myModel.add(Dropout(0.2))

opt = tf.keras.optimizers.SGD(lr=0.001, decay=1e-6)
ls  = tf.keras.losses.categorical_crossentropy
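The compile and fit calls are not shown in the question; presumably opt and ls are wired in roughly like this (a sketch only — y_train, batch_size, and epochs below are stand-ins, not from the question):

myModel.compile(optimizer=opt, loss=ls, metrics=['accuracy'])
# hypothetical training call; y_train must be one-hot encoded for categorical_crossentropy
myModel.fit(scaled_X, y_train, batch_size=32, epochs=10, validation_split=0.1)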

Sometimes I also get this warning:

W1014 21:02:57.125363  6600 ag_logging.py:146] Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x00000188C58C3E18> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: 
WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x00000188C58C3E18> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause:

1 Answer:

Answer 0 (score: 2):

The two culprits are the Dropout layers and the data preprocessing. Details, and more, below:

  • Dropout on stacked LSTMs is known to yield poor performance, as it introduces too much noise for stable extraction of time-dependent features. Fix: use recurrent_dropout instead (see the sketch after this list)
  • If you are working with signal data, or other data with (1) outliers, (2) phase information, or (3) frequency information, MinMaxScaler will destroy the latter two, plus amplitude information per (1). Fix: use StandardScaler or QuantileTransformer
  • Consider the Nadam optimizer over SGD; it proved vastly superior in my LSTM applications, and is generally more robust than SGD
  • Consider using CuDNNLSTM; it can run up to 10x faster
  • Make sure your data is shaped appropriately for an LSTM: (batch_size, timesteps, features), or equivalently, (samples, timesteps, channels)

A caveat: if you do use recurrent_dropout, set activation='tanh', since 'relu' is unstable.
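Putting these suggestions together, a minimal sketch of a revised setup, assuming myData is the asker's (1476, 1000, 1) array and the labels have 8 classes (myData and the layer sizes are stand-ins, not a definitive architecture):

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.preprocessing import StandardScaler

# StandardScaler instead of MinMaxScaler; it expects 2D input,
# so flatten to (samples * timesteps, channels), scale, then restore the 3D shape
n_samples, n_timesteps, n_channels = myData.shape  # e.g. (1476, 1000, 1)
flat = myData.reshape(-1, n_channels)
scaled = StandardScaler().fit_transform(flat).reshape(n_samples, n_timesteps, n_channels)

model = Sequential()
# recurrent_dropout replaces the Dropout layers; tanh because relu + recurrent_dropout is unstable
# (CuDNNLSTM would be ~10x faster, but supports neither recurrent_dropout nor non-tanh activations)
model.add(LSTM(128, input_shape=(n_timesteps, n_channels), activation='tanh',
               recurrent_dropout=0.2, return_sequences=True))
model.add(LSTM(64, activation='tanh', recurrent_dropout=0.2))
model.add(Dense(8, activation='softmax'))

# Nadam over SGD
model.compile(optimizer=tf.keras.optimizers.Nadam(lr=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])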


UPDATE: the real culprit turned out to be insufficient data. Details here