Loss decreases, but the accuracy of my 3D convnet stays the same

Asked: 2018-08-08 20:12:40

Tags: python conv-neural-network mxnet

I'm trying to train a 3D convnet on 3D spatial data using mxnet. When I run the program, the loss generally decreases over time, but the training and test accuracy stay the same. I'm very new to neural networks and I don't know why this happens. I'm not sure whether my network parameters are wrong, my data isn't preprocessed correctly, there's a problem in the code used to train the network, or it's something else entirely.
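One of the suspects above is preprocessing. A minimal sketch of what per-volume standardization could look like (plain numpy, with a hypothetical volume shape and made-up values — not the actual pipeline):

```python
import numpy as np

# Hypothetical preprocessing step: scale each 3D volume to zero mean and
# unit variance so the first conv layer doesn't see huge raw values
# (which would also explain a loss that starts in the millions).
volume = np.random.rand(11, 11, 21).astype("float32") * 1000.0
volume = (volume - volume.mean()) / (volume.std() + 1e-8)

print(volume.mean(), volume.std())  # roughly 0.0 and 1.0
```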

Here is the code I'm currently using for the convnet, and the output it produces:

import multiprocessing

import mxnet as mx
from mxnet import autograd, gluon, nd

ctx = mx.cpu()  # assumed context; swap in mx.gpu() if one is available
cpucount = multiprocessing.cpu_count()

train_data = mx.gluon.data.DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=cpucount)
test_data = mx.gluon.data.DataLoader(test_dataset, batch_size=64, shuffle=True, num_workers=cpucount)

batch_size = 64
num_inputs = 2541
num_outputs = 2
num_fc = 635
net = gluon.nn.Sequential()

with net.name_scope():
    net.add(gluon.nn.Conv3D(channels=1, kernel_size=3, activation='relu'))
    net.add(gluon.nn.MaxPool3D(pool_size=2, strides=2))
    net.add(gluon.nn.Conv3D(channels=1, kernel_size=3, activation='relu'))
    net.add(gluon.nn.MaxPool3D(pool_size=2, strides=2))

    net.add(gluon.nn.Flatten())

    net.add(gluon.nn.Dense(num_fc, activation="relu"))
    net.add(gluon.nn.Dense(num_outputs))

net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .0001})

def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        label = label.reshape(len(label))
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]

epochs = 100
smoothing_constant = .01

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])

        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (curr_loss if ((i == 0) and (e == 0))
                       else (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss)

    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
          (e, moving_loss, train_accuracy, test_accuracy))

Output:

Epoch 0. Loss: 26525280.32107588, Train_acc 0.462039045553, Test_acc 0.386554621849
Epoch 1. Loss: 17045452.882872812, Train_acc 0.462039045553, Test_acc 0.386554621849
Epoch 2. Loss: 10953605.785322478, Train_acc 0.462039045553, Test_acc 0.386554621849
Epoch 3. Loss: 7038914.162310514, Train_acc 0.462039045553, Test_acc 0.386554621849
Epoch 4. Loss: 4523287.90677917, Train_acc 0.462039045553, Test_acc 0.386554621849
Epoch 5. Loss: 2906717.2884657932, Train_acc 0.462039045553, Test_acc 0.386554621849
Epoch 6. Loss: 1867890.253548351, Train_acc 0.462039045553, Test_acc 0.386554621849

(I've omitted the remaining epochs, but even once the loss is down to around 0.09, the accuracy stays the same.)

0 Answers