Understanding list comprehensions in a custom loss function with the Keras backend

Asked: 2018-08-23 10:10:36

Tags: python tensorflow neural-network keras

I am trying to implement a custom loss function in Keras.

I wrote the custom loss function in "standard" Python/NumPy, then realized it needs to be expressed with the TensorFlow/Keras backend. Some simple operations (such as mean or sum) are available in the Keras backend, so I tried to translate my function. However, I don't know how to convert the list comprehension or the `cdist` call used below.

The original Python/NumPy lines are the commented ones; the uncommented lines are my attempt to write them with the Keras backend:

def loss_zhang(y_true, y_pred):

    predictions = y_pred[0]
    features = y_pred[1]

    # Parameter ~ hypersphere radius
    m = 0.5

    # Find the center of the features for the reference class
    # center = np.mean(features[np.where(y_true==1)], axis=0)
    center = K.mean(features[K.tf.where(y_true==1)], axis=0)

    # Compute the distances between all the features and the center
    # dist = cdist(features, [center], metric='euclidean')
    dist = [K.sqrt(K.sum(K.square(u - center), axis=-1)) for u in features]

    # Compute the loss for each sample (based on distance to center)
    # losses = [ofRef*d**2 + (1-ofRef)*(np.max([0, m-d]))**2 for d, ofRef in zip(dist, y_true)]
    losses = y_true*K.square(dist) + (1-y_true)*K.square(K.max([0, m-d]))

    # Total loss = sum of individual losses
    # return 0.5*np.sum(losses)
    return 0.5*K.sum(losses)
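The `cdist(features, [center])` call the question wants to translate is just the per-row Euclidean distance to a single point, so (as a side note, not part of the original question) it can be replaced by broadcasting, with no Python-level loop. A minimal NumPy sketch of that equivalence, using made-up example values:

```python
import numpy as np

# Hypothetical feature matrix (3 samples, 2 features) and a center point
features = np.array([[0.0, 0.0],
                     [3.0, 4.0],
                     [6.0, 8.0]])
center = np.array([0.0, 0.0])

# Broadcasting subtracts the center from every row at once; reducing over
# the last axis gives each sample's Euclidean distance to the center.
dist = np.sqrt(np.square(features - center).sum(axis=-1))
print(dist)  # → [ 0.  5. 10.]
```

The same broadcasting pattern carries over to symbolic tensors, which is why a vectorized form is needed instead of iterating over `features`.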

1 Answer:

Answer 0 (score: 0)

Found it:

import tensorflow as tf
from keras import backend as K

def loss_zhang(y_true, y_pred):
    y_true = y_true[0]
    predictions = y_pred[0]
    features = y_pred[1]

    # Parameter ~ hypersphere radius
    m = K.constant(0.35)

    # Find the center of the features for the reference class
    center = tf.reduce_mean(tf.gather_nd(features, tf.where(tf.squeeze(tf.equal(y_true, 1)))), axis=0)

    # Compute the distances between all the features and the center
    dist = K.sqrt(K.sum(K.square(features - center), axis=-1))

    # Compute the loss for each sample (based on distance to center)
    losses = y_true*K.square(dist) + (1-y_true)*K.square(tf.maximum(0., m-dist))

    # Total loss = sum of individual losses
    return 0.5*K.sum(losses)
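For sanity-checking the backend version on concrete numbers, here is a plain-NumPy mirror of the same loss (my own sketch, not from the answer; `loss_zhang_np` and the sample values are made up, and `y_true` is assumed to be a 0/1 vector):

```python
import numpy as np

def loss_zhang_np(y_true, features, m=0.35):
    # Center of the reference-class (y_true == 1) features
    center = features[y_true == 1].mean(axis=0)
    # Euclidean distance of every sample's features to that center
    dist = np.sqrt(np.square(features - center).sum(axis=-1))
    # In-class samples are pulled toward the center; out-of-class samples
    # are only penalized while they sit inside the radius m
    losses = y_true * dist**2 + (1 - y_true) * np.maximum(0.0, m - dist)**2
    return 0.5 * losses.sum()

y_true = np.array([1, 1, 0])
features = np.array([[0.0, 0.0],
                     [2.0, 0.0],
                     [1.0, 0.1]])
print(loss_zhang_np(y_true, features))  # → 1.03125
```

Running both versions on the same small batch (in eager mode or a session) is an easy way to confirm the `tf.gather_nd`/`tf.where` translation matches the original NumPy intent.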