Training loss works, but val_loss = nan

Asked: 2018-12-30 09:33:48

Tags: keras neural-network

I am trying to apply batch normalization to a U-Net, and I have the following architecture:

# imports assumed by the snippet (not shown in the original post)
from keras.models import Model
from keras.layers import (Input, Lambda, Conv2D, Conv2DTranspose,
                          MaxPooling2D, BatchNormalization, concatenate)

inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = Lambda(lambda x: x / 255) (inputs)
width = 32
activation = 'sigmoid'
c1 = Conv2D(width, (3, 3), activation='elu', padding='same') (s)
c1 = Conv2D(width, (3, 3), activation='elu', padding='same') (c1)
c1 = BatchNormalization()(c1)
p1 = MaxPooling2D((2, 2)) (c1)
#p1 = Dropout(0.2)(p1)
c2 = Conv2D(width*2, (3, 3), activation='elu', padding='same') (p1)
c2 = Conv2D(width*2, (3, 3), activation='elu', padding='same') (c2)
c2 = BatchNormalization()(c2)
p2 = MaxPooling2D((2, 2)) (c2)
#p2 = Dropout(0.2)(p2)

c3 = Conv2D(width*4, (3, 3), activation='elu', padding='same') (p2)
c3 = Conv2D(width*4, (3, 3), activation='elu', padding='same') (c3)
c3 = BatchNormalization()(c3)
p3 = MaxPooling2D((2, 2)) (c3)
#p3 = Dropout(0.2)(p3)

c4 = Conv2D(width*8, (3, 3), activation='elu', padding='same') (p3)
c4 = Conv2D(width*8, (3, 3), activation='elu', padding='same') (c4)
c4 = BatchNormalization()(c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
#p4 = Dropout(0.2)(p4)

c5 = Conv2D(width*16, (3, 3), activation='elu', padding='same') (p4)
c5 = Conv2D(width*16, (3, 3), activation='elu', padding='same') (c5)

u6 = Conv2DTranspose(width*8, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
#u6 = Dropout(0.2)(u6)
c6 = Conv2D(width*8, (3, 3), activation='elu', padding='same') (u6)
c6 = Conv2D(width*8, (3, 3), activation='elu', padding='same') (c6)

u7 = Conv2DTranspose(width*4, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
#u7 = Dropout(0.2)(u7)
c7 = Conv2D(width*4, (3, 3), activation='elu', padding='same') (u7)
c7 = Conv2D(width*4, (3, 3), activation='elu', padding='same') (c7)

u8 = Conv2DTranspose(width*2, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
#u8 = Dropout(0.2)(u8)
c8 = Conv2D(width*2, (3, 3), activation='elu', padding='same') (u8)
c8 = Conv2D(width*2, (3, 3), activation='elu', padding='same') (c8)

u9 = Conv2DTranspose(width, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
#u9 = Dropout(0.2)(u9)
c9 = Conv2D(width, (3, 3), activation='elu', padding='same') (u9)
c9 = Conv2D(width, (3, 3), activation='elu', padding='same') (c9)

outputs = Conv2D(num_classes, (1, 1), activation=activation) (c9)
model = Model(inputs=[inputs], outputs=[outputs])

What happens is that the training loss plateaus very quickly (within 2 epochs), while the validation loss stays nan the whole time. I looked at other posts, and some say this happens when the dimension ordering is wrong. But if that were true, the training loss should fail as well. Another suggested cause is that the loss diverges because of the learning rate, but that is also ruled out, since my training loss does work. What am I doing wrong?
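For reference (not from the original post), a nan loss usually means the loss computation hit something like log(0) or 0 * inf. A minimal numpy sketch of an unclipped binary cross-entropy shows how this happens; the function name and eps parameter are illustrative, not Keras API:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=0.0):
    # Without clipping (eps=0), a prediction of exactly 0 or 1 makes a
    # log term -inf, and multiplying it by a zero label yields nan.
    if eps:
        y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0])
loss_ok = binary_crossentropy(y_true, np.array([0.9, 0.1]))   # ≈ 0.105
loss_bad = binary_crossentropy(y_true, np.array([1.0, 0.0]))  # nan
```

Keras' built-in losses clip predictions internally, which is why a nan validation loss more often points at the data pipeline or the metrics setup than at the loss formula itself.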

2 Answers:

Answer 0 (score: 0):

If num_classes > 1, your activation should be 'softmax' rather than 'sigmoid'; then it might work.
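As an illustration (not part of the original answer): softmax produces a probability distribution over classes per pixel, while independent sigmoids do not. A small numpy sketch with hypothetical shapes:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fake logits for a 2x2 image with 3 classes (shapes chosen for the demo).
np.random.seed(0)
logits = np.random.randn(2, 2, 3)

p_soft = softmax(logits)  # per-pixel class probabilities, sum to 1
p_sig = sigmoid(logits)   # independent per-class probabilities

print(np.allclose(p_soft.sum(axis=-1), 1.0))  # True
print(np.allclose(p_sig.sum(axis=-1), 1.0))   # False
```

This is why multi-class segmentation pairs softmax with categorical cross-entropy, while sigmoid fits binary or multi-label targets.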

Answer 1 (score: 0):

I wasn't passing any validation data to the fit method! I need to do something like: model.fit(X_train, Y_train, validation_split=0.1, batch_size=8, epochs=30)
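Worth knowing about validation_split: Keras holds out the *last* fraction of the provided samples, before any shuffling, so ordered data can give an unrepresentative validation set. A minimal numpy sketch of that split (the helper name is illustrative, not Keras API):

```python
import numpy as np

def split_validation(X, Y, validation_split=0.1):
    # Mimics Keras' validation_split: the last fraction of the
    # (unshuffled) samples is held out for validation.
    n_train = len(X) - int(len(X) * validation_split)
    return (X[:n_train], Y[:n_train]), (X[n_train:], Y[n_train:])

X = np.arange(100).reshape(100, 1)
Y = np.arange(100)
(X_tr, Y_tr), (X_val, Y_val) = split_validation(X, Y)
print(len(X_tr), len(X_val))  # 90 10
```

If the data is ordered (e.g. by class), shuffle it before calling fit, or pass an explicit validation_data tuple instead.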
