Using TimeDistributed with recurrent layers in Keras

Date: 2017-06-21 09:17:30

Tags: python keras recurrent-neural-network

I want to run an LSTM over several different sequences in each batch and then join the final outputs. Here is what I have been trying:

from keras.layers import Dense, Input, LSTM, Embedding, TimeDistributed

num_sentences = 4
num_features = 3
num_time_steps = 5

inputs = Input([num_sentences, num_time_steps])
emb_layer = Embedding(10, num_features)
embedded = emb_layer(inputs)
lstm_layer = LSTM(4)

shape = [num_sentences, num_time_steps, num_features]
lstm_outputs = TimeDistributed(lstm_layer, input_shape=shape)(embedded)

This gives me the following error:

Traceback (most recent call last):
  File "test.py", line 12, in <module>
    lstm_outputs = TimeDistributed(lstm_layer, input_shape=shape)(embedded)
  File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 546, in __call__
    self.build(input_shapes[0])
  File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/layers/wrappers.py", line 94, in build
    self.layer.build(child_input_shape)
  File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/layers/recurrent.py", line 702, in build
    self.input_dim = input_shape[2]
IndexError: tuple index out of range

I tried omitting the input_shape argument to TimeDistributed, but it didn't change anything.

2 Answers:

Answer 0 (score: 1)

input_shape needs to be an argument of the LSTM layer, not of TimeDistributed (which is just a wrapper). Omitting it, everything works fine for me:

from keras.layers import Dense, Input, LSTM, Embedding, TimeDistributed

num_sentences = 4
num_features = 3
num_time_steps = 5

inputs = Input([num_sentences, num_time_steps])
emb_layer = Embedding(10, num_features)
embedded = emb_layer(inputs)
lstm_layer = LSTM(4)

shape = [num_sentences, num_time_steps, num_features]
lstm_outputs = TimeDistributed(lstm_layer)(embedded)


#OUTPUT:
Using TensorFlow backend.
[Finished in 1.5s]
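To get back to the original goal of joining the final outputs: a minimal sketch of one possible continuation (not part of the original answer; the Flatten layer and the single-unit Dense head are illustrative assumptions) would flatten the per-sentence LSTM outputs and feed them to a Dense layer:

from keras.layers import Dense, Flatten
from keras.models import Model

# lstm_outputs has shape (batch, num_sentences, 4): one 4-dim vector per sentence.
# Flatten joins them into a single (batch, num_sentences * 4) vector.
joined = Flatten()(lstm_outputs)
predictions = Dense(1, activation='sigmoid')(joined)  # illustrative output head

model = Model(inputs=inputs, outputs=predictions)  # Keras 2 functional API
model.summary()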

Answer 1 (score: 0)

After trying michetonu's answer and hitting the same error, I realized my Keras version was probably outdated. Indeed, I was running Keras 1.2; the code works fine on 2.0.
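For anyone unsure which version they have installed, a quick check (assuming a standard Keras install) is:

import keras
print(keras.__version__)  # the snippet above fails on 1.2 but runs on 2.0+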
