Manually computed validation loss differs from reported val_loss when using regularization

Posted: 2018-12-10 23:47:42

Tags: python tensorflow keras

When I manually compute the validation loss in a custom callback, the result differs from the value Keras reports when L2 kernel regularization is used.

Example code:

from keras.layers import Dense, Input
from keras import regularizers
import keras.backend as K
from keras.losses import mean_squared_error
from keras.models import Model
from keras.callbacks import Callback
from keras.optimizers import RMSprop


class ValidationCallback(Callback):
    def __init__(self, validation_x, validation_y):
        super(ValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y

    def on_epoch_end(self, epoch, logs=None):
        # What am I missing in this loss calculation that keras is doing?
        validation_y_predicted = self.model.predict(self.validation_x)
        print("My validation loss: %.4f" % K.eval(K.mean(mean_squared_error(self.validation_y, validation_y_predicted))))


input = Input(shape=(1024,))
hidden = Dense(1024, kernel_regularizer=regularizers.l2())(input)
output = Dense(1024, kernel_regularizer=regularizers.l2())(hidden)

model = Model(inputs=[input], outputs=output)

optimizer = RMSprop()
model.compile(loss='mse', optimizer=optimizer)

# x_train, y_train, x_validation and y_validation are assumed to be defined
model.fit(x=x_train,
          y=y_train,
          callbacks=[ValidationCallback(x_validation, y_validation)],
          validation_data=(x_validation, y_validation))

Output:

10000/10000 [==============================] - 2s 249us/step - loss: 1.3125 - val_loss: 0.1250
My validation loss: 0.0861

What do I need to do to compute the same validation loss in my callback?

1 Answer:

Answer 0 (score: 1):

This is expected behavior. L2 regularization modifies the loss function by adding a penalty term (the sum of squared weights, scaled by a factor) in order to reduce generalization error. Keras includes this penalty in the loss and val_loss it reports, while your callback computes only the plain MSE, which is why the numbers differ.
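Concretely, for a single regularized kernel W with factor lambda, the value Keras reports is the MSE plus lambda times the sum of the squared kernel entries. A minimal numpy illustration (all names here are mine, for illustration only):

import numpy as np

W = np.random.rand(4, 4)            # stand-in for one Dense kernel
lambd = 0.01                        # its l2 factor

mse = 0.125                         # plain mean squared error on a batch
penalty = lambd * np.sum(W ** 2)    # l2 penalty contributed by this kernel
total = mse + penalty               # what Keras reports as loss / val_loss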

To compute the same validation loss in your callback, you need to fetch the weights of each layer and add their sum of squares, scaled by the corresponding factor, to the MSE. The argument l of regularizers.l2 is the regularization factor for each layer; its default, which the bare regularizers.l2() calls in your model pick up implicitly, is 0.01.

With that said, you can match the validation loss of your example as follows:

from keras.layers import Dense, Input
from keras import regularizers
import keras.backend as K
from keras.losses import mean_squared_error
from keras.models import Model
from keras.callbacks import Callback
from keras.optimizers import RMSprop
import numpy as np


class ValidationCallback(Callback):
    def __init__(self, validation_x, validation_y, lambd):
        super(ValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y
        self.lambd = lambd

    def on_epoch_end(self, epoch, logs=None):
        validation_y_predicted = self.model.predict(self.validation_x)

        # Compute the l2 penalty layer by layer. trainable_weights
        # alternates kernel, bias, kernel, bias, ...; only the kernels
        # (even indices) are penalized by kernel_regularizer.
        weights = self.model.trainable_weights
        reg_term = 0
        for i, w in enumerate(weights):
            if i % 2 == 0:  # kernel of layer i // 2
                w_f = K.flatten(w)
                reg_term += self.lambd[i // 2] * K.sum(K.square(w_f))

        # predict() returns float32 while validation_y is float64, so the
        # MSE tensor is float64; cast the penalty to match before adding.
        mse_loss = K.mean(mean_squared_error(self.validation_y, validation_y_predicted))
        loss = mse_loss + K.cast(reg_term, 'float64')

        print("My validation loss: %.4f" % K.eval(loss))


# One factor per regularized layer; 0.01 matches the regularizers.l2() default
lambd = [0.01, 0.01]
input = Input(shape=(1024,))
hidden = Dense(1024, kernel_regularizer=regularizers.l2(lambd[0]))(input)
output = Dense(1024, kernel_regularizer=regularizers.l2(lambd[1]))(hidden)
model = Model(inputs=[input], outputs=output)
optimizer = RMSprop()
model.compile(loss='mse', optimizer=optimizer)

# Tiny dummy dataset so the example runs end to end
x_train = np.ones((2, 1024))
y_train = np.random.rand(2, 1024)
x_validation = x_train
y_validation = y_train

model.fit(x=x_train,
          y=y_train,
          callbacks=[ValidationCallback(x_validation, y_validation, lambd)],
          validation_data=(x_validation, y_validation))
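
Alternatively, instead of hard-coding the factors, you could evaluate the penalty tensors that Keras itself collects in model.losses. A hypothetical variant of the callback, reusing the imports above; it assumes a TF1-era Keras where these tensors depend only on the layer weights, so K.eval can evaluate them directly:

class RegularizedValidationCallback(Callback):
    def __init__(self, validation_x, validation_y):
        super(RegularizedValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y

    def on_epoch_end(self, epoch, logs=None):
        y_pred = self.model.predict(self.validation_x)
        mse = K.eval(K.mean(mean_squared_error(self.validation_y, y_pred)))
        # model.losses holds one penalty tensor per regularized kernel
        reg = sum(K.eval(t) for t in self.model.losses)
        print("My validation loss: %.4f" % (mse + reg))

Either way, the printed value should now agree with the val_loss Keras reports, up to floating-point rounding.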