Can I use sklearn inside a custom Keras metric to create a micro f1_score?

Time: 2018-07-27 08:47:58

Tags: python keras

I found a version on Stack Overflow:

from keras import backend as K

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.

        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.

        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))


model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=[f1])
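As a sanity check, the backend arithmetic above can be mirrored in plain NumPy; this is a sketch of the same clip/round/sum steps (the helper name `batch_f1` is mine, not part of the original answer):

```python
import numpy as np

def batch_f1(y_true, y_pred, eps=1e-7):
    # Same steps as the Keras-backend version: round/clip, then count
    # true positives, predicted positives, and possible positives.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    predicted_pos = np.sum(np.round(np.clip(y_pred, 0, 1)))
    possible_pos = np.sum(np.round(np.clip(y_true, 0, 1)))
    precision = tp / (predicted_pos + eps)
    recall = tp / (possible_pos + eps)
    return 2 * precision * recall / (precision + recall + eps)

# 2 of 3 true positives recovered, no false positives:
# precision 1, recall 2/3, so F1 is about 0.8.
print(batch_f1([1, 0, 1, 1], [1, 0, 0, 1]))
```

Note the `eps` term only guards against division by zero, so the result differs from the exact F1 by a negligible amount.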

But can I use sklearn's f1_score when creating a custom metric? I want to use the average of the macro and micro f1_score. Can someone help me? Thanks.

1 answer:

Answer 0 (score: 0)

I think you can use the code shown above during training, since it computes the F1 score for each batch; you can see it in the logs printed to the terminal.


1/13 [=>............................] - ETA: 4s - loss: 0.2646 - f1: 0.2927

2/13 [===>..........................] - ETA: 4s - loss: 0.2664 - f1: 0.1463

...

13/13 [==============================] - 7s 505ms/step - loss: 0.2615 - f1: 0.1008 - val_loss: 0.2887 - val_f1: 0.1464


If you use the fit method and want to compute F1 once per epoch, try the code below.

import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import f1_score


class Metrics(Callback):
    """Custom callback that computes the validation macro F1 at the end of each epoch."""

    def on_train_begin(self, logs={}):
        self.val_f1s = []
        self.val_recalls = []
        self.val_precisions = []

    def on_epoch_end(self, epoch, logs={}):
        # For multi-class output, take the argmax of the predicted
        # probabilities and of the one-hot validation targets.
        val_predict = np.argmax(np.asarray(self.model.predict(self.validation_data[0])), axis=1)
        val_targ = np.argmax(self.validation_data[1], axis=1)
        _val_f1 = f1_score(val_targ, val_predict, average='macro')
        self.val_f1s.append(_val_f1)
        # _val_recall = recall_score(val_targ, val_predict, average='macro')
        # _val_precision = precision_score(val_targ, val_predict, average='macro')
        print(' — val_f1:', _val_f1)
        return
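The argmax-then-f1_score step inside on_epoch_end can be exercised on its own, without a model; a minimal sketch (the helper name `epoch_end_f1` is mine):

```python
import numpy as np
from sklearn.metrics import f1_score

def epoch_end_f1(probs, onehot_targets):
    """Mirror of the callback's epoch-end math: argmax the predicted
    probabilities and the one-hot targets, then macro-average the F1."""
    preds = np.argmax(np.asarray(probs), axis=1)
    targs = np.argmax(np.asarray(onehot_targets), axis=1)
    return f1_score(targs, preds, average='macro')

# Tiny 2-class example: predictions argmax to [0, 1, 0], targets to [0, 1, 1],
# so both per-class F1 scores are 2/3 and the macro average is 2/3.
probs = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
targets = [[1, 0], [0, 1], [0, 1]]
print(epoch_end_f1(probs, targets))
```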

Fit the model with the callback:

metrics = Metrics()

model.fit_generator(generator=generator_train,
                    steps_per_epoch=len(generator_train),
                    validation_data=generator_val,
                    validation_steps=len(generator_val),
                    epochs=epochs,
                    callbacks=[metrics])

One tip to note: if you train with fit_generator(), only the batch-wise code shown first will work; if you train with fit() instead, you can try the callback function (self.validation_data is only populated for callbacks when fit() is given validation data).
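As for the question's original ask, the average of the macro- and micro-averaged F1 can be computed directly with sklearn; a sketch (the helper name `avg_f1` is mine):

```python
from sklearn.metrics import f1_score

def avg_f1(y_true, y_pred):
    """Average of the macro- and micro-averaged F1 scores."""
    macro = f1_score(y_true, y_pred, average='macro')
    micro = f1_score(y_true, y_pred, average='micro')
    return (macro + micro) / 2

# Per-class F1 is 0.5 (class 0) and 0.75 (class 1), so macro = 0.625;
# micro F1 equals accuracy here, 4/6.
y_true = [0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0, 1]
print(avg_f1(y_true, y_pred))
```

To use this per epoch, the `f1_score(..., average='macro')` line in the callback above could be replaced by a call to this helper.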

That's all!
