InvalidArgumentError when implementing a variational autoencoder in Keras

Asked: 2019-04-17 20:04:22

Tags: python, keras

I am trying to implement a variational autoencoder, using Chollet's deep-learning book as a reference. I run into trouble when fitting the data to the model: my images have shape (512, 512, 3), but fitting raises an error about incompatible shapes. I wonder whether the error is in the Lambda layer that implements the variational loss, since the images I am using come from a different dataset than the book's.


InvalidArgumentError: Incompatible shapes: [524288] vs. [1572864]
     [[{{node custom_variational_layer_6/logistic_loss/mul}} =
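If I factor the two flattened sizes in the error against my 512 x 512 images, they seem to differ only in the channel count (the batch size of 2 below is a guess on my part):

```python
# Factoring the two flattened sizes from the error message against my
# 512 x 512 images. The batch size of 2 is an assumption, not a known value.
h, w, batch = 512, 512, 2

assert 524288 == batch * h * w * 1    # fits a 1-channel image per sample
assert 1572864 == batch * h * w * 3   # fits a 3-channel image per sample
print(1572864 // 524288)              # -> 3, the same factor as the channels
```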

I have tried adjusting the image shape and the number of channels, changing the reconstruction loss, and replacing the mean over the combined KL and cross-entropy losses with a sum over the cross-entropy loss alone; nothing seems to help.

import numpy as np
import keras
from keras import layers
from keras import backend as K
from keras.layers import Input
from keras.models import Model

img_shape = (512, 512, 3)  # the images are 512 x 512 RGB
latent_dim = 2             # dimensionality of the latent space, as in the book

def create_encoder():
    input_img = Input(shape=img_shape)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(input_img)
    x = layers.Conv2D(64, 3, padding='same', strides=(2, 2), activation='relu')(x)
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    shape_before_flattening = K.int_shape(x)
    x = layers.Flatten()(x)
    x = layers.Dense(32, activation='relu')(x)
    z_mean = layers.Dense(latent_dim)(x)
    z_log_var = layers.Dense(latent_dim)(x)
    model = Model(input_img, [z_mean, z_log_var])
    return model, shape_before_flattening

def sample(args):
    # Reparameterization trick: z = mean + sigma * eps, with eps ~ N(0, 1).
    z_mean, z_log_var = args
    eps = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                         mean=0., stddev=1.)
    return z_mean + K.exp(z_log_var) * eps

encoder, shape_before_flattening = create_encoder()
input_img = encoder.inputs[0]
z_mean = encoder.outputs[0]
z_log_var = encoder.outputs[1]
z = layers.Lambda(sample)([z_mean, z_log_var])

def create_decoder():
    print(K.int_shape(z))
    decoder_input = Input(K.int_shape(z)[1:])
    x = layers.Dense(np.prod(shape_before_flattening[1:]),
                     activation='relu')(decoder_input)
    x = layers.Reshape(shape_before_flattening[1:])(x)
    x = layers.Conv2DTranspose(32, 3,
                               padding='same',
                               activation='relu',
                               strides=(2, 2))(x)
    x = layers.Conv2D(1, 3,
                      padding='same',
                      activation='sigmoid')(x)
    model = Model(decoder_input, x)
    return model

class CustomVariationalLayer(keras.layers.Layer):
    def vae_loss(self, x, z_decoded): 
        x = K.flatten(x)
        z_decoded = K.flatten(z_decoded)
        xent_loss = keras.metrics.binary_crossentropy(x, z_decoded)
        kl_loss = -5e-4 * K.mean(
            1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1) 
        print(K.int_shape(xent_loss))
        print(K.int_shape(kl_loss))
        #return K.mean(mse_loss + kl_loss)
        return K.mean(xent_loss + kl_loss)

    def call(self, inputs):
        x = inputs[0]
        z_decoded = inputs[1]
        loss = self.vae_loss(x, z_decoded)
        self.add_loss(loss, inputs=inputs)
        return x

decoder = create_decoder()
z_decoded = decoder(z)
out = CustomVariationalLayer()([input_img, z_decoded])

from sklearn.model_selection import train_test_split
# X, y are the image arrays and labels, loaded earlier (not shown).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
print("X train shape ", X_train.shape)
print("X test shape ", X_test.shape)

vae = Model(input_img, out)
vae.compile(optimizer='rmsprop', loss=None)  # the loss is added inside the custom layer
vae.summary()
batch_size = 16  # placeholder; the actual value is defined elsewhere
vae.fit(x=X_train, y=None, shuffle=True, epochs=10, batch_size=batch_size)
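As a sanity check on where the mismatch could arise, here is a quick pure-Python trace of the spatial sizes through the network above; `conv_same` and `deconv_same` are my own shorthand for Keras's `'same'`-padding size arithmetic, and I am not certain this is the actual cause:

```python
import math

def conv_same(size, stride):
    # A 'same'-padded Conv2D maps a spatial dimension to ceil(size / stride).
    return math.ceil(size / stride)

def deconv_same(size, stride):
    # A 'same'-padded Conv2DTranspose multiplies the dimension by the stride.
    return size * stride

h = 512
h = conv_same(h, 2)    # the single stride-2 conv in the encoder: 512 -> 256
h = deconv_same(h, 2)  # the single stride-2 transposed conv in the decoder: 256 -> 512

# Spatial sizes match the input again, but the decoder's final Conv2D has
# 1 filter, so its output would be (512, 512, 1) against a (512, 512, 3) input.
print((h, h, 1), "vs", (512, 512, 3))
```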

I expected this to train the model without raising an error.

0 Answers