Stateful RNN in a Keras functional model fails with wrong tensor shape

Time: 2018-07-10 15:00:02

Tags: python tensorflow keras

I have defined a Keras functional-API model containing a block with a stateful LSTM, as follows:

import numpy as np
from tensorflow.python import keras


data = np.ones((1,2,3))

input_shape = data.shape  # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

dense_1 = keras.layers.Dense(output_units, batch_input_shape=(
    input_shape[0], input_shape[-1], input_shape[1]),
                             name="dense_1")
output_1 = keras.layers.TimeDistributed(dense_1, input_shape=input_shape, name="output_1")(recurrent_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[output_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) #works

### add model block to model ###
model_block = model_1(inputs)
model = keras.models.Model(inputs=[inputs], outputs=[model_block], name="model")
model.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) #works

model.predict(data)  #fails

As written, the first predict() call (the one on the inner model block containing the stateful LSTM layer) works fine, but the second call fails with the following error:

 Traceback (most recent call last):
  File ".../functional_stateful.py", line 38, in <module>
    model_1.predict(data)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.py", line 1478, in predict
    self, x, batch_size=batch_size, verbose=verbose, steps=steps)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 363, in predict_loop
    batch_outs = f(ins_batch)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/backend.py", line 2897, in __call__
    fetched = self._callable_fn(*array_vals)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1454, in __call__
    self._session._session, self._handle, args, status, None)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [1,2,3]
     [[Node: inputs = Placeholder[dtype=DT_FLOAT, shape=[1,2,3], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

With stateful=True commented out in the LSTM definition, the whole thing runs fine. Does anyone know what is going on?

Edit: Apparently, merely calling the stateful model block on another layer is enough to make a subsequent predict() on that block fail (i.e. the following code fails with the same error):

import numpy as np
from tensorflow.python import keras

data = np.ones((1,2,3))

input_shape = data.shape  # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### sample model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

# ### add model block to model ###
model_block = model_1(inputs)

model_1.predict(data) #fails 

Edit 2: But apparently, if you call predict() on the stateful block *before* calling the block on another input, you can still use it afterwards (i.e. the following code runs fine):

import numpy as np
from tensorflow.python import keras

data = np.ones((1,2,3))

input_shape = data.shape  # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### sample model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) #works

# ### add model block to model ###
model_block = model_1(inputs)

model_1.predict(data) #works

1 Answer:

Answer 0 (score: 1)

I suspect that stateful=True RNNs are not compatible with more than one input tensor.
(In your code you have both dummy_inputs_1 and inputs. Keras refers to this in many of its messages as "multiple inbound nodes". Effectively you have two parallel branches there: one for the original dummy_inputs_1 and one for the new inputs.)

Why is that? An RNN layer is designed to receive a "time series" (or many "parallel" series within a batch) divided into time steps.

When it receives batch 2, it interprets it as the continuation of batch 1 along the time dimension.
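To make the mechanism concrete, here is a minimal NumPy sketch of what "stateful" means (this is an illustration of the idea, not Keras internals; the weights and the `run_batch` helper are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(3, 3))   # input-to-hidden weights (illustrative)
W_h = rng.normal(size=(3, 3))   # hidden-to-hidden weights (illustrative)

def run_batch(batch, h):
    """Run one batch of shape (steps, features), starting from state h."""
    for x_t in batch:
        h = np.tanh(x_t @ W_x + h @ W_h)
    return h

# The layer has exactly ONE state tensor, carried across predict() calls:
state = np.zeros(3)
state = run_batch(np.ones((2, 3)), state)  # batch 1
state = run_batch(np.ones((2, 3)), state)  # batch 2 continues from batch 1

# Contrast: processing batch 2 from a fresh zero state gives a
# different result, because the continuity is lost.
fresh = run_batch(np.ones((2, 3)), np.zeros(3))
```

With two input tensors feeding the same layer, there is no single notion of "the previous batch" for that one state tensor to continue from.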

When you have two input tensors, how is the RNN supposed to decide what is a continuation of what? You lose the consistency of "continuous sequences". The layer has only one state tensor, so it cannot track parallel input tensors.

So, if you want a stateful RNN with more than one input, I suggest creating copies of the layer. If you want the copies to share the same weights, you will probably need a custom layer that reuses the same weight tensors.
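A minimal sketch of the "copies of the layer" idea, assuming tensorflow.keras (names like lstm_a/lstm_b are illustrative; note the copies here have separate weights as well as separate states, weight sharing would need the custom-layer approach mentioned above):

```python
import numpy as np
from tensorflow import keras

batch_shape = (1, 2, 3)  # batch size, step size, input size

in_a = keras.layers.Input(batch_shape=batch_shape)
in_b = keras.layers.Input(batch_shape=batch_shape)

# Two separate layer instances: each input gets its OWN state tensor,
# so neither branch has multiple inbound nodes.
lstm_a = keras.layers.LSTM(3, return_sequences=True, stateful=True)
lstm_b = keras.layers.LSTM(3, return_sequences=True, stateful=True)

model_a = keras.models.Model(in_a, lstm_a(in_a))
model_b = keras.models.Model(in_b, lstm_b(in_b))

data = np.ones(batch_shape)
out_a = model_a.predict(data)  # each model tracks its own sequence
out_b = model_b.predict(data)
```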

Now, if you intend to use this stateful=True block only once, you should probably take model_1.input rather than providing another input tensor.
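A sketch of that last suggestion, assuming tensorflow.keras: instead of calling model_1 on a second Input tensor (which creates the second inbound node), wrap the block by pointing at its existing input and output tensors.

```python
import numpy as np
from tensorflow import keras

batch_shape = (1, 2, 3)  # batch size, step size, input size

dummy_inputs_1 = keras.layers.Input(batch_shape=batch_shape)
recurrent_1 = keras.layers.LSTM(3, return_sequences=True,
                                stateful=True)(dummy_inputs_1)
model_1 = keras.models.Model(dummy_inputs_1, recurrent_1)

# Reuse model_1's own input/output tensors instead of calling
# model_1(inputs) on a fresh Input -- no second parallel branch.
model = keras.models.Model(inputs=model_1.input, outputs=model_1.output)

data = np.ones(batch_shape)
out_inner = model_1.predict(data)
out_outer = model.predict(data)
```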