Multiplying a custom trainable parameter by the hidden units

Asked: 2018-10-22 05:10:46

Tags: python tensorflow keras nlp lstm

I want to square the differences between the tensors/outputs coming out of an LSTM layer and multiply the result by a trainable parameter.

As @rvinas pointed out, I tried writing my own layer for this purpose:

from keras.layers import Layer
from keras import initializers, regularizers, constraints
import keras.backend as K

class MyLayer(Layer):
    def __init__(self, W_regularizer=None, W_constraint=None, **kwargs):
        self.init = initializers.get('glorot_uniform')
        self.W_regularizer = regularizers.get(W_regularizer)
        self.W_constraint = constraints.get(W_constraint)
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Expect a 3D input: (batch_size, nb_timesteps, hidden_dim)
        assert len(input_shape) == 3
        # Create a trainable weight vector of length hidden_dim for this layer.
        self.W = self.add_weight(name='{}_W'.format(self.name),
                                 shape=(input_shape[-1],),
                                 initializer=self.init,
                                 regularizer=self.W_regularizer,
                                 constraint=self.W_constraint,
                                 trainable=True)
        super(MyLayer, self).build(input_shape)

The call function simply multiplies the input tensor by the weights I initialized. I still need to figure out how to compute the pairwise differences and square them.

    def call(self, x):
        # x: (batch_size, nb_timesteps, hidden_dim), self.W: (hidden_dim,)
        uit = K.dot(x, self.W)
        return uit

    def compute_output_shape(self, input_shape):
        return input_shape[0], input_shape[-1]

But then I get an AssertionError at assert len(input_shape) >= 3.

I want to run:

from keras.layers import Input, Lambda, LSTM
from keras.models import Model
import keras.backend as K

# `input` is assumed to be a 3D tensor, e.g. input = Input(shape=(nb_timesteps, input_dim))
lstm = LSTM(128, return_sequences=True)(input)
something = MyLayer()(lstm)

1 Answer:

Answer 0 (score: 0):

Correct me if I'm wrong, but I believe this is what you intend to do.
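
Assuming "differences" here refers to the differences between consecutive time steps, then for LSTM hidden states h_1, ..., h_T and weight vector W, the code below computes, per example:

    out[t] = sum_i W[i] * (h[t + 1][i] - h[t][i])^2,    for t = 1, ..., T - 1

In code: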

def call(self, x):
    diff = K.square(x[:, 1:, :] - x[:, :-1, :])  # Shape=(batch_size, nb_timesteps-1, hidden_dim)
    uit = K.dot(diff, self.W[:, None])  # Shape=(batch_size, nb_timesteps-1, 1)
    uit = K.squeeze(uit, axis=-1)  # Shape=(batch_size, nb_timesteps-1)
    return uit
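
Since the output now has one timestep fewer than the input, compute_output_shape would presumably need to be adjusted to match, along these lines:

def compute_output_shape(self, input_shape):
    # Output shape: (batch_size, nb_timesteps - 1); nb_timesteps may be None for variable-length input
    if input_shape[1] is None:
        return (input_shape[0], None)
    return (input_shape[0], input_shape[1] - 1)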

Note: untested.
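
As a rough sanity check, the layer could be wired into a small model along these lines (nb_timesteps, input_dim, and the optimizer/loss are placeholder choices, not taken from the question):

from keras.layers import Input, LSTM
from keras.models import Model

nb_timesteps, input_dim = 20, 64  # placeholder values
inp = Input(shape=(nb_timesteps, input_dim))
lstm = LSTM(128, return_sequences=True)(inp)
out = MyLayer()(lstm)  # shape: (batch_size, nb_timesteps - 1)

model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='mse')
model.summary()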
