How to run keras.fit with a tf.data.Dataset that has multiple outputs and custom loss functions?

Asked: 2019-10-22 12:56:09

Tags: tensorflow2.0 tf.keras

I am using TF 2.0 and want to train my network with tf.keras and tf.data.Dataset. However, I am struggling to use tf.keras Model.fit together with a tf.data.Dataset that has multiple outputs and custom loss functions.

My TensorFlow version is 2.0. Here is the example code that I tried but which fails.

import tensorflow as tf
import numpy as np

# define model
inputs = tf.keras.Input((512,512,3), name='model_input')

x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding='same',kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='conv1')(inputs)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding='same',kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='conv2')(x)

output1 = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding='same',kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='output1')(x)
output2 = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding='same',kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='output2')(x)

model = tf.keras.Model(inputs, [output1, output2])

# define dataset
def parse_func(single_data): # just for example case
    input = single_data
    output1 = single_data
    output2 = single_data
    weight1 = output1
    weight2 = output2
    return input, output1, output2, weight1, weight2

def tf_parse_func(single_data):
    return tf.py_function(parse_func, [single_data], [tf.float32, tf.float32, tf.float32, tf.float32, tf.float32])

data = np.random.rand(10, 512, 512, 3)
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.map(tf_parse_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(2, drop_remainder=True)

# def loss func
def loss_fn1(label, pred):
    return tf.reduce_mean(tf.keras.losses.MSE(label, pred))
def loss_fn2(label, pred):
    return tf.nn.l2_loss(label-pred)

# start training

model.compile(loss={'output1':loss_fn1, 'output2':loss_fn2},
              loss_weights={'output1':1, 'output2':2},
              optimizer=tf.keras.optimizers.Adam())

model.fit(dataset, epochs=5)


Actually, I want to replace loss_weights={'output1':1, 'output2':2} with something like loss_weights={'output1':weight1, 'output2':weight2}, but I don't know how to do that. Ideally, weight1/weight2 would be passed as arguments to the loss functions, but I don't know how to do that either. I want loss_fn1 to use output1 and weight1 from the dataset, and loss_fn2 to use output2 and weight2.

When I run the code above, it fails with an error.

I have tried many different approaches, but I could not make this work. Could anyone help me? Thanks a lot!
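
For reference (this is not part of the original question): the flat five-element tuple returned by tf_parse_func does not match the (inputs, targets) structure that model.fit expects from a tf.data.Dataset, which is presumably why the call fails. One pattern often used for per-pixel loss weights is to feed the targets and weights into the model as extra inputs and register the weighted losses with model.add_loss, so that fit needs no separate target or weight arguments at all. Below is a minimal sketch of that idea; the extra input names and the helper to_model_inputs are made up for illustration, the output filters are set to 3 so predictions match the label shapes, and exact add_loss behaviour may differ between TF 2.x point releases:

import tensorflow as tf

# targets and per-pixel weights enter the graph as extra inputs
img     = tf.keras.Input((512, 512, 3), name='model_input')
label1  = tf.keras.Input((512, 512, 3), name='label1')
label2  = tf.keras.Input((512, 512, 3), name='label2')
weight1 = tf.keras.Input((512, 512, 3), name='weight1')
weight2 = tf.keras.Input((512, 512, 3), name='weight2')

x = tf.keras.layers.Conv2D(256, 3, padding='same', name='conv1')(img)
out1 = tf.keras.layers.Conv2D(3, 3, padding='same', name='output1')(x)
out2 = tf.keras.layers.Conv2D(3, 3, padding='same', name='output2')(x)

train_model = tf.keras.Model([img, label1, label2, weight1, weight2], [out1, out2])

# the weighted losses are attached to the graph, so compile() needs no loss argument
train_model.add_loss(tf.reduce_mean(tf.keras.losses.MSE(label1 * weight1, out1 * weight1)))
train_model.add_loss(2.0 * tf.nn.l2_loss(label2 * weight2 - out2 * weight2))
train_model.compile(optimizer=tf.keras.optimizers.Adam())

# the dataset then yields only a dict of named inputs, no targets
def to_model_inputs(x, y1, y2, w1, w2):
    return {'model_input': x, 'label1': y1, 'label2': y2,
            'weight1': w1, 'weight2': w2}

# train_ds = dataset.map(to_model_inputs)
# train_model.fit(train_ds, epochs=5)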

1 Answer:

Answer 0 (score: 1)

I managed to get this working by using model.fit_generator instead of model.fit. Here is the source code that runs successfully:

import tensorflow as tf
import numpy as np

# define model
inputs = tf.keras.Input((112, 112, 3), name='model_input')

x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding='same',
                           kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='conv1')(inputs)
x = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding='same',
                           kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='conv2')(x)

output1 = tf.keras.layers.Conv2D(filters=3, kernel_size=3, padding='same',
                                 kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='output1')(x)
output2 = tf.keras.layers.Conv2D(filters=3, kernel_size=3, padding='same',
                                 kernel_initializer=tf.random_normal_initializer(stddev=0.01), name='output2')(x)

model = tf.keras.Model(inputs, [output1, output2])


# define dataset
def parse_func(single_data):  # just for example case
    input = single_data
    output1 = single_data
    output2 = single_data
    weight1 = output1
    weight2 = output2
    return input, output1, output2, weight1, weight2


def tf_parse_func(single_data):
    input, output1, output2, weight1, weight2 = tf.py_function(parse_func, [single_data], [tf.float32, tf.float32, tf.float32, tf.float32, tf.float32])
    return input, output1, output2, weight1, weight2


data = np.random.rand(10, 112, 112, 3).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices(data).repeat(-1)
dataset = dataset.map(tf_parse_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(2, drop_remainder=True)

def generator():
    # pack the per-pixel weights into the labels' channel dimension so that
    # the custom loss functions can slice them back apart
    for input, output1, output2, weight1, weight2 in dataset:
        output1 = tf.concat([output1, weight1], axis=-1)
        output2 = tf.concat([output2, weight2], axis=-1)
        yield input, [output1, output2]

# def loss func
def loss_fn1(label, pred):
    # first 3 channels are the target, last 3 channels are the per-pixel weight
    weight = label[..., 3:]
    label = label[..., :3]
    return tf.reduce_mean(tf.keras.losses.MSE(label*weight, pred*weight))


def loss_fn2(label, pred):
    weight = label[..., 3:]
    label = label[..., :3]
    return tf.nn.l2_loss(label*weight - pred*weight)


# start training

model.compile(loss={'output1': loss_fn1, 'output2': loss_fn2},
              loss_weights={'output1': 1, 'output2': 2},
              optimizer=tf.keras.optimizers.Adam())

# model.fit(dataset, epochs=5)
model.fit_generator(generator(), steps_per_epoch=10, epochs=5)

The output is:

Epoch 1/5
10/10 [==============================] - 7s 661ms/step - loss: 6814.9424 - output1_loss: 0.0673 - output2_loss: 3407.4375
Epoch 2/5
10/10 [==============================] - 7s 656ms/step - loss: 1858.2006 - output1_loss: 0.0669 - output2_loss: 929.0669
Epoch 3/5
10/10 [==============================] - 7s 658ms/step - loss: 1141.7914 - output1_loss: 0.0403 - output2_loss: 570.8755
Epoch 4/5
10/10 [==============================] - 7s 656ms/step - loss: 854.0343 - output1_loss: 0.0341 - output2_loss: 427.0001
Epoch 5/5
10/10 [==============================] - 7s 656ms/step - loss: 708.3558 - output1_loss: 0.0179 - output2_loss: 354.1689

I think this is only a workaround, and I still hope someone can answer the original question.
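
As a side note (not part of the answer above), the same trick of packing the weights into the label channels should also let model.fit consume the tf.data.Dataset directly, without a Python generator, by mapping the dataset into the (inputs, targets-dict) structure Keras expects. A short sketch, assuming the model, loss functions and dataset defined in the answer; the helper name pack_targets is made up, and depending on the TF version the static shapes lost by tf.py_function may need to be restored with set_shape before fit accepts the dataset:

def pack_targets(x, y1, y2, w1, w2):
    # pack the per-pixel weights into the label channels; the custom losses slice them back out
    return x, {'output1': tf.concat([y1, w1], axis=-1),
               'output2': tf.concat([y2, w2], axis=-1)}

train_ds = dataset.map(pack_targets)  # `dataset` as built in the answer above
model.fit(train_ds, steps_per_epoch=10, epochs=5)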
