Keras shared-weight network gives inconsistent results

Asked: 2018-11-03 13:13:25

Tags: python keras

I am trying to implement a network in Keras for a symmetric problem: a model that predicts the distance between inputs a and b.

I used the following official references (1, 2) and put together the simple implementation below:

from __future__ import absolute_import
from __future__ import print_function
import keras
from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout
import numpy as np


def create_base_network(input_shape):
    '''Base network to be shared (eq. to feature extraction).
    '''
    input = Input(shape=input_shape)
    x = Flatten()(input)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.1)(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.1)(x)
    x = Dense(128, activation='relu')(x)
    return Model(input, x)

num_of_features = 256
num_of_samples = 280
data_a = np.random.random((num_of_samples, 1, num_of_features))
data_b = np.random.random((num_of_samples, 1, num_of_features))
# binary label
labels = np.random.randint(2, size=num_of_samples)

input_shape= (1, num_of_features)
# network definition
base_network = create_base_network(input_shape)

input_a = Input(shape=input_shape)
input_b = Input(shape=input_shape)

# because we re-use the same instance `base_network`, the weights of the network 
# will be shared across the two branches
encoded_a = base_network(input_a)
encoded_b = base_network(input_b)

# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)

# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)

# We define a trainable model linking the inputs to the predictions
model = Model(inputs=[input_a, input_b], outputs=predictions)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit([data_a, data_b], labels, epochs=10)   
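
The weight sharing itself can be sanity-checked with something like this (a minimal sketch, using the `model` and `base_network` defined above):

# Because `base_network` is called on both inputs, it appears exactly once as a
# layer of the outer model, so both branches read the same weight tensors.
print(base_network in model.layers)        # expected: True
print(len(base_network.get_weights()))     # 6 arrays: kernel + bias for each of the 3 Dense layers
print(base_network.get_weights()[0].shape) # (256, 128): kernel of the first Dense after Flatten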

However, when I evaluate the model, I get different metric values on the test set depending on whether I pass a and b or b and a:

data_a_test = np.random.random((num_of_samples, 1, num_of_features))
data_b_test = np.random.random((num_of_samples, 1, num_of_features))
labels_test = np.random.randint(2, size=num_of_samples)

loss_ab, metric_ab = model.evaluate([data_a_test, data_b_test], labels_test, batch_size=32, verbose=2)
loss_ba, metric_ba = model.evaluate([data_b_test, data_a_test], labels_test, batch_size=32, verbose=2)

loss_ab: 0.9805058070591518 metric_ab: 0.48928571343421934
loss_ba: 1.0541694641113282 metric_ba: 0.5
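
For comparison, the raw predictions can also be checked sample by sample; a minimal diagnostic sketch (using the `model` and the test arrays defined above):

pred_ab = model.predict([data_a_test, data_b_test])
pred_ba = model.predict([data_b_test, data_a_test])
# The shared encoder treats both inputs identically, but the final Dense assigns
# a separate weight to each half of the concatenated vector, so the two input
# orderings are not guaranteed to produce identical outputs.
print(np.allclose(pred_ab, pred_ba))
print(np.max(np.abs(pred_ab - pred_ba)))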

What am I missing here? Any input would be appreciated...

0 Answers:

No answers yet.