How to perform row-wise or column-wise max pooling in Keras

Posted: 2017-10-23 06:41:33

Tags: tensorflow deep-learning keras attention-model

I am trying to perform row-wise and column-wise max pooling over an attention layer, as described in the following link: http://www.dfki.de/~neumann/ML4QAseminar2016/presentations/Attentive-Pooling-Network.pdf (slide 15).

I am working with a text dataset, where a sentence is fed to a CNN. Each word of the sentence has been embedded. The code for this is as follows:

# MAX_NB_WORDS, emb_dim, embedding_matrix, MAX_SEQUENCE_LENGTH, k and
# FILTER_LENGTH are hyperparameters defined elsewhere in the asker's code.
model.add(Embedding(MAX_NB_WORDS, emb_dim, weights=[embedding_matrix], input_length=MAX_SEQUENCE_LENGTH, trainable=False))
model.add(Conv1D(k, FILTER_LENGTH, border_mode="valid", activation="relu"))  # border_mode is the Keras 1 name for padding

The output of the CNN has shape (None, 256). This acts as the input to the attention layer. Can anyone suggest how to implement row-wise or column-wise max pooling in Keras with TensorFlow as the backend?

1 Answer:

Answer 0 (score: 1)

If the images in your model have shape (batch, width, height, channels), you can reshape the data to hide one of the spatial dimensions and use a 1D pooling:

For width:

model.add(Reshape((width, height*channels)))      # hide "height" inside the channels
model.add(MaxPooling1D())                         # pools along "width": (batch, width/2, height*channels)
model.add(Reshape((width//2, height, channels)))  # integer division; an odd width is floored by the default 'valid' pooling

For height:

#Here, TimeDistributed will consider "width" as an extra time dimension,
#and will simply treat it as an extra "batch" dimension
model.add(TimeDistributed(MaxPooling1D()))  # pools along "height": (batch, width, height/2, channels)

Working example, a functional API model with two branches, one for each case:

import numpy as np
from keras.layers import *
from keras.models import *

inp = Input((30,50,4))  # (width, height, channels)

#branch 1: pool along width
out1 = Reshape((30,200))(inp)
out1 = MaxPooling1D()(out1)
out1 = Reshape((15,50,4))(out1)

#branch 2: pool along height
out2 = TimeDistributed(MaxPooling1D())(inp)

model = Model(inp,[out1,out2])
model.summary()
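To verify the shapes, you can push a random batch through this model (a quick sanity check, not part of the original answer):

x = np.random.rand(2, 30, 50, 4)       # a dummy batch of two "images"
out1_val, out2_val = model.predict(x)
print(out1_val.shape)  # (2, 15, 50, 4): width halved
print(out2_val.shape)  # (2, 30, 25, 4): height halved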

An alternative to Reshape, in case you don't want to bother with the numbers:

#swap height and width
model.add(Permute((2,1,3)))

#apply the pooling to width
model.add(TimeDistributed(MaxPooling1D()))

#bring height and width to the correct order
model.add(Permute((2,1,3)))
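If what you actually need is the row-wise/column-wise max pooling from the attentive pooling slides (collapsing each row or column of the attention matrix to its single maximum, rather than halving the dimension), a Lambda layer with the backend max function is enough. Below is a minimal sketch, assuming a hypothetical (rows, cols) attention matrix:

from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

G = Input((30, 50))  # hypothetical attention matrix: (batch, rows, cols)
row_max = Lambda(lambda g: K.max(g, axis=2))(G)  # max over each row    -> (batch, 30)
col_max = Lambda(lambda g: K.max(g, axis=1))(G)  # max over each column -> (batch, 50)

model = Model(G, [row_max, col_max])
model.summary()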