How to combine two layers with specific weights and biases in Keras?

Asked: 2017-03-16 12:04:58

Tags: tensorflow keras tflearn

I am trying to rewrite a piece of tflearn code using Keras.

The goal is to combine two inputs, where one of the inputs skips the first layer. The following code works in tflearn:

    import tensorflow as tf
    import tflearn

    # Two different inputs.
    inputs = tflearn.input_data(shape=[None, 10])
    action = tflearn.input_data(shape=[None, 10])

    #First layer used only by the inputs
    net = tflearn.fully_connected(inputs, 400, activation='relu')

    # Add the action tensor in the 2nd hidden layer
    # Use two temp layers to get the corresponding weights and biases
    t1 = tflearn.fully_connected(net, 300)
    t2 = tflearn.fully_connected(action, 300)

    # Combine the two layers using the weights from t1 and t2 and the bias from t2
    net = tflearn.activation(tf.matmul(net,t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
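The combining line is just an affine map followed by a ReLU. A minimal NumPy sketch of that step, with random arrays standing in for `t1.W`, `t2.W`, and `t2.b` and a made-up batch size of 4, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
batch = 4

net = rng.standard_normal((batch, 400))     # output of the first 400-unit layer
action = rng.standard_normal((batch, 10))   # action input, which skips layer 1

W1 = rng.standard_normal((400, 300))  # stands in for t1.W
W2 = rng.standard_normal((10, 300))   # stands in for t2.W
b2 = rng.standard_normal(300)         # stands in for t2.b

# relu(net @ W1 + action @ W2 + b2) -- the tflearn combine step
out = np.maximum(net @ W1 + action @ W2 + b2, 0)
print(out.shape)  # (4, 300)
```

Note that only `t2`'s bias is used; `t1`'s bias is discarded by the combine step.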

I am trying to replicate this in Keras with the following code:

    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Dense, Activation, Merge

    # Two different inputs.
    inputs = tf.placeholder(tf.float32, [None, 10])
    action = tf.placeholder(tf.float32, [None, 10])

    #First layer used only by the inputs
    t1 = Sequential()
    t1.add(Dense(400, activation='relu', input_shape=(1,10)))

    # Add the action tensor in the 2nd hidden layer
    # Use two temp layers to get the corresponding weights and biases
    t1.add(Dense(300))

    t2 = Sequential()
    t2.add(Dense(300, input_shape=(1,10)))

    # Combine the two layers
    critnet = Sequential()
    critnet.add(Merge([t1, t2], mode='sum'))
    critnet.add(Activation('relu'))

    # Create the net using the inputs and action placeholder
    net = critnet([inputs, action])

The Keras code behaves differently. How can I combine the two layers in Keras to get the same result as in tflearn?

1 Answer:

Answer 0 (score: 0)

You can use a Lambda layer that takes your two layers as input and combines them the same way using keras.backend. I believe K.dot is the equivalent of tf.matmul.
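As a concrete illustration, here is a sketch using today's tf.keras functional API (assuming TensorFlow 2.x, not the 2017-era Sequential/Merge API from the question). Instead of a Lambda, giving `t1` no bias and `t2` a bias and then summing reproduces `matmul(net, t1.W) + matmul(action, t2.W) + t2.b` with trainable weights:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(10,))
action = layers.Input(shape=(10,))

# First layer, used only by `inputs`
net = layers.Dense(400, activation='relu')(inputs)

# t1 contributes only weights (no bias); t2 contributes weights and the bias
t1 = layers.Dense(300, use_bias=False)(net)   # matmul(net, t1.W)
t2 = layers.Dense(300)(action)                # matmul(action, t2.W) + t2.b

out = layers.Activation('relu')(layers.Add()([t1, t2]))
model = Model([inputs, action], out)

x = np.random.rand(2, 10).astype('float32')
a = np.random.rand(2, 10).astype('float32')
y = model.predict([x, a], verbose=0)
print(y.shape)  # (2, 300)
```

A Lambda layer with K.dot would also work, but weights created inside a Lambda are not tracked as trainable parameters, which is why the Dense-plus-Add formulation is the more idiomatic choice.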