How to add an attention layer (along with a Bi-LSTM layer) in a Keras Sequential model?

Asked: 2019-08-10 02:20:41

Tags: python-3.x keras lstm attention-model

I am trying to find a simple way to add an attention layer to a Keras Sequential model, but I have run into a number of problems along the way.

I am a beginner, so I chose Keras as my starting point. My task is to build a Bi-LSTM with an attention model. I built a Bi-LSTM model on the IMDB dataset, and I found a package called "keras-self-attention" (https://pypi.org/project/keras-self-attention/), but I ran into problems when adding its attention layer to the Keras Sequential model.

from keras.datasets import imdb
from keras.preprocessing import sequence
from keras_self_attention import SeqSelfAttention

max_features = 10000
maxlen = 500
batch_size = 32

# data
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)

# model 
from keras import models
from keras import layers
from keras.layers import Dense, Embedding, LSTM, Bidirectional


model = models.Sequential()
model.add(Embedding(max_features, 32))
model.add(Bidirectional(LSTM(32, return_sequences=True)))
# add an attention layer
model.add(SeqSelfAttention(attention_activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))

# compile and fit
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.2)

The code above raises a ValueError:

ValueError                                Traceback (most recent call last)
<ipython-input-97-e6eb02d043c4> in <module>()
----> 1 history = model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.2)

~/denglz/venv4re/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    950             sample_weight=sample_weight,
    951             class_weight=class_weight,
--> 952             batch_size=batch_size)
    953         # Prepare validation data.
    954         do_validation = False

~/denglz/venv4re/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    787                 feed_output_shapes,
    788                 check_batch_axis=False,  # Don't enforce the batch size.
--> 789                 exception_prefix='target')
    790 
    791             # Generate sample-wise weight values given the `sample_weight` and

~/denglz/venv4re/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    126                         ': expected ' + names[i] + ' to have ' +
    127                         str(len(shape)) + ' dimensions, but got array '
--> 128                         'with shape ' + str(data_shape))
    129                 if not check_batch_axis:
    130                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_7 to have 3 dimensions, but got array with shape (25000, 1)

So what went wrong? I am new to deep learning; if you know the answer, please help me.

1 Answer:

Answer 0 (score: 2)

In your code, the attention layer's output has the same shape as its input (so it is 3-dimensional in this case).
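To see where the extra dimension comes from, it helps to print each layer's output shape. A quick diagnostic sketch for the model in the question (the layer names are illustrative; the time axis shows as None because Embedding was not given an input_length):

for layer in model.layers:
    print(layer.name, layer.output_shape)

# embedding          -> (None, None, 32)   (batch, timesteps, features)
# bidirectional      -> (None, None, 64)   32 LSTM units in each direction
# seq_self_attention -> (None, None, 64)   same shape as its input, still 3-D
# dense              -> (None, None, 1)    Dense is applied to every timestep
#
# So the model predicts one value per timestep, while y_train has shape
# (25000, 1): one label per review, hence the ValueError.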

Use SeqWeightedAttention instead; it weights and sums the timesteps, so its output is 2-dimensional:

from keras.datasets import imdb
from keras.preprocessing import sequence
from keras_self_attention import SeqSelfAttention, SeqWeightedAttention

max_features = 10000
maxlen = 500
batch_size = 32

# data
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)

# model 
from keras import models
from keras import layers
from keras.layers import Dense, Embedding, LSTM, Bidirectional
model = models.Sequential()
# model.add(Embedding(max_features, 32, mask_zero=True))  # optional: mask the zero padding
model.add(Embedding(max_features, 32))
model.add(Bidirectional(LSTM(32, return_sequences=True)))

# add an attention layer
# model.add(SeqSelfAttention(attention_activation='sigmoid'))  # 3-D output, causes the error above
model.add(SeqWeightedAttention())  # collapses (batch, timesteps, 64) to (batch, 64)

model.add(Dense(1, activation='sigmoid'))

# compile and fit
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
model.summary()

history = model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.2)

Here's the complete working code; model.summary() prints the layer output shapes so you can confirm the fix.
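If you would rather keep SeqSelfAttention, one alternative (a sketch of my own, not from the original answer) is to collapse the time axis yourself with a pooling layer before the final Dense, turning the 3-D attention output into a 2-D vector:

from keras.layers import GlobalMaxPooling1D

model = models.Sequential()
model.add(Embedding(max_features, 32))
model.add(Bidirectional(LSTM(32, return_sequences=True)))
model.add(SeqSelfAttention(attention_activation='sigmoid'))  # (batch, timesteps, 64)
model.add(GlobalMaxPooling1D())                              # pool over timesteps: (batch, 64)
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])

Either variant hands a 2-D tensor to the final Dense layer, which matches the (num_samples, 1) labels.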