Why do the accuracy and loss stay constant during training?

Date: 2019-09-03 12:24:12

Tags: python tensorflow keras neural-network classification

I am trying to modify the beginner tutorial at https://www.tensorflow.org/tutorials/keras/basic_classification to use my own data. The goal is to classify images of dogs and cats. The code is very simple and is given below. The problem is that the network does not seem to learn at all: the training loss and accuracy stay the same after every epoch.

The images (X_training) and labels (y_training) appear to be in the correct format: X_training.shape returns (18827, 80, 80, 3)

y_training is a one-dimensional list with entries in {0, 1}

I have checked several times that the "images" in X_training are labeled correctly: if X_training[i,:,:,:] represents a dog, then y_training[i] returns 1, and if X_training[i,:,:,:] represents a cat, then y_training[i] returns 0.
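The format checks described above can be automated. A minimal sketch using small synthetic stand-ins for the pickled arrays (the real files are not included here, so the array contents are hypothetical):

```python
import numpy as np

# Synthetic stand-ins for the pickled arrays, just to illustrate the checks.
X_training = np.random.randint(0, 256, size=(6, 80, 80, 3), dtype=np.uint8)
y_training = np.array([1, 0, 1, 0, 0, 1])

# Shape check: (num_samples, height, width, channels)
assert X_training.shape[1:] == (80, 80, 3)
assert len(X_training) == len(y_training)

# Label check: binary labels only (1 = dog, 0 = cat)
assert set(np.unique(y_training)) <= {0, 1}

# Normalization check: uint8 divided by 255.0 yields floats in [0, 1]
X_norm = X_training / 255.0
assert X_norm.min() >= 0.0 and X_norm.max() <= 1.0
```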

Below is the complete Python file without the import statements.

#loading the data from 4 pickle files:
pickle_in = open("X_training.pickle","rb")
X_training = pickle.load(pickle_in)

pickle_in = open("X_testing.pickle","rb")
X_testing = pickle.load(pickle_in)

pickle_in = open("y_training.pickle","rb")
y_training = pickle.load(pickle_in)

pickle_in = open("y_testing.pickle","rb")
y_testing = pickle.load(pickle_in)


#normalizing the input data:
X_training = X_training/255.0
X_testing = X_testing/255.0


#building the model:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(80, 80,3)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(1,activation='sigmoid')
])
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])


#running the model:
model.fit(X_training, y_training, epochs=10)

The code compiles and trains for 10 epochs, but neither the loss nor the accuracy improves; they stay the same after every epoch. The code works fine with the Fashion-MNIST dataset used in the tutorial, with minor changes to account for multi-class vs. binary classification and the different input shape.

2 answers:

Answer 0 (score: 1)

If you are training a classification model, your loss function must be binary_crossentropy rather than mean_squared_error, which is meant for regression tasks.

Replace

model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])

with

model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
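Why this swap matters can be seen from the gradients. With a sigmoid output p = sigmoid(z), the gradient of MSE with respect to the logit z is 2(p - y) * p * (1 - p), which vanishes when the prediction saturates, while binary cross-entropy gives simply p - y. A small hand-derived sketch in plain NumPy (not part of the original answer):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradients of each loss w.r.t. the pre-sigmoid logit z for one example,
# derived by hand: with p = sigmoid(z),
#   d(MSE)/dz = 2 * (p - y) * p * (1 - p)
#   d(BCE)/dz = p - y
def mse_grad(z, y=1.0):
    p = sigmoid(z)
    return 2.0 * (p - y) * p * (1.0 - p)

def bce_grad(z, y=1.0):
    p = sigmoid(z)
    return p - y

# A confidently wrong prediction: z = -8 gives p ~ 0.0003 while y = 1.
print(mse_grad(-8.0))  # ~ -6.7e-4: almost no learning signal
print(bce_grad(-8.0))  # ~ -0.9997: strong corrective gradient
```

Under MSE a badly wrong, saturated unit barely moves, which matches the symptom of loss and accuracy staying flat across epochs.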

Also, I suggest you do not use the relu activation on that dense layer; use a linear activation instead.

Replace

keras.layers.Dense(128, activation=tf.nn.relu),

with

keras.layers.Dense(128),

To make better use of the power of neural networks, use convolutional layers before your flatten layer.

Answer 1 (score: 1)

I found a slightly more involved implementation that works. Here is the complete code without the import statements:

#global variables:
batch_size = 32
nr_of_epochs = 64
input_shape = (80,80,3)


#loading the data from 4 pickle files:
pickle_in = open("X_training.pickle","rb")
X_training = pickle.load(pickle_in)

pickle_in = open("X_testing.pickle","rb")
X_testing = pickle.load(pickle_in)

pickle_in = open("y_training.pickle","rb")
y_training = pickle.load(pickle_in)

pickle_in = open("y_testing.pickle","rb")
y_testing = pickle.load(pickle_in)



#building the model
def define_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # compile model
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
model = define_model()


#Possibility for image data augmentation
train_datagen = ImageDataGenerator(rescale=1.0/255.0)
val_datagen = ImageDataGenerator(rescale=1./255.) 
train_generator =train_datagen.flow(X_training,y_training,batch_size=batch_size)
val_generator = val_datagen.flow(X_testing,y_testing,batch_size= batch_size)



#running the model
history = model.fit_generator(train_generator,steps_per_epoch=len(X_training) //batch_size,
                              epochs=nr_of_epochs,validation_data=val_generator,
                              validation_steps=len(X_testing) //batch_size)
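One side note on the floor division in steps_per_epoch above: the last partial batch is never drawn, so a few samples are silently skipped each epoch. A quick check using the sample count from the question:

```python
# steps_per_epoch = len(X_training) // batch_size floors the division,
# so the final partial batch is dropped from each epoch.
n_samples = 18827   # from X_training.shape in the question
batch_size = 32

steps = n_samples // batch_size
leftover = n_samples - steps * batch_size
print(steps)     # 588 full batches per epoch
print(leftover)  # 11 samples left out of each epoch
```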