Validation accuracy makes no sense

Time: 2017-12-07 22:12:30

Tags: machine-learning neural-network deep-learning keras conv-neural-network

My dataset is as follows: training set: 5589 images; validation set: 1398 images; test set: 1996 images; image size: (1156, 256, 1). The problem is binary classification. Using one-hot encoded target arrays ([0, 1] / [1, 0]) with categorical_crossentropy, I got some results (about 83% accuracy on the test set). Realizing how silly that was, I changed the targets to binary form ([0] or [1]) and switched categorical_crossentropy to binary_crossentropy.
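In other words, the change was from the first encoding below to the second (a minimal sketch; the array names are illustrative):

import numpy as np

# One-hot targets, used with categorical_crossentropy and a 2-unit output layer:
y_onehot = np.array([[0, 1], [1, 0]])
# Binary targets, used with binary_crossentropy and a 1-unit output layer:
y_binary = np.array([1, 0])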

With this approach, no matter what learning rate I use, the validation accuracy sticks at 82.05% while the training accuracy stays at 25.80%. Of course this makes no sense, and the accuracy on the test set is only around 30%.

Why is this happening? I checked the training data and the metadata, and they are correct. My code is posted below.

from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.models import Model
from keras import regularizers

# Image size as stated above: 1156 x 256, single channel
input_shape = (1156, 256, 1)

inp = Input(shape=input_shape)
out = Conv2D(16, (5, 5), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(inp)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Dropout(0.5)(out)

out = Conv2D(32, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Dropout(0.5)(out)

out = Conv2D(32, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = Dropout(0.5)(out)

out = Conv2D(64, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = Conv2D(64, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)

out = Conv2D(128, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = Conv2D(128, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)

out = Conv2D(256, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = Conv2D(256, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)
out = Conv2D(512, (3, 3), activation='relu', kernel_initializer='glorot_uniform', kernel_regularizer=regularizers.l2(0.01), padding='same')(out)
out = MaxPooling2D(pool_size=(2, 2))(out)

out = Flatten()(out)
out = Dropout(0.5)(out)
# Single output unit, but with a softmax activation
dense1 = Dense(1, activation="softmax")(out)
model = Model(inputs=inp, outputs=dense1)
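The compile and fit steps are not shown in the post; a sketch of a typical setup for this model (the optimizer settings, epoch count, batch size, and data arrays here are illustrative, not from the post):

from keras.optimizers import Adam

# Assumed training setup for a binary-target model
model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20, batch_size=16)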

The training epochs look like this: [screenshot: Epochs]

1 Answer:

Answer 0 (score: 0)

Change the last activation from softmax to sigmoid, e.g.

dense1 = Dense(1, activation="sigmoid")(out)
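The reason: softmax normalizes across the output units, so with a single unit it always outputs exactly 1.0, no matter what the logit is, and the network predicts the same class for every input. A quick numpy check (illustrative):

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

# A single-unit softmax layer always outputs 1.0:
print(softmax(np.array([3.7])))    # [1.]
print(softmax(np.array([-12.0])))  # [1.]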

Also try lowering the learning rate:

from keras.optimizers import Adam

model.compile(optimizer=Adam(lr=0.0001), loss='binary_crossentropy', metrics=['accuracy'])
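With a sigmoid output, the model predicts a probability in [0, 1]; to get class labels at test time, threshold at 0.5, e.g. (X_test is a placeholder):

y_pred = (model.predict(X_test) > 0.5).astype(int)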