MXNet custom symbol loss with Gluon

Date: 2019-06-07 08:18:59

Tags: mxnet

I wrote this code (almost all of it comes from a tutorial; I only changed a few lines), and it does not work.

import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon
from mxnet.gluon import nn

np.random.seed(42)
mx.random.seed(42)
ctx = mx.gpu()

def data_xform(data):
    """Move channel axis to the beginning, cast to float32, and normalize to [0, 1]."""
    return nd.moveaxis(data, 2, 0).astype('float32') / 255


# prepare data
train_data = mx.gluon.data.vision.MNIST(train=True).transform_first(data_xform)
val_data = mx.gluon.data.vision.MNIST(train=False).transform_first(data_xform)
batch_size = 100
train_loader = mx.gluon.data.DataLoader(train_data, shuffle=True, batch_size=batch_size)
val_loader = mx.gluon.data.DataLoader(val_data, shuffle=False, batch_size=batch_size)


# create network
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data=data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(data=fc1, name='relu1', act_type='relu')
fc2 = mx.symbol.FullyConnected(data=act1, name='fc2', num_hidden=64)
act2 = mx.symbol.Activation(data=fc2, name='relu2', act_type='relu')
fc3 = mx.symbol.FullyConnected(data=act2, name='fc3', num_hidden=10)

net = gluon.SymbolBlock(outputs=[fc3], inputs=[data])
net.initialize(ctx=ctx)


# create trainer, metric
trainer = gluon.Trainer(
    params=net.collect_params(),
    optimizer='sgd',
    optimizer_params={'learning_rate': 0.1, 'momentum': 0.9, 'wd': 0.00001},
)
metric = mx.metric.Accuracy()


# learn
num_epochs = 10
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        inputs = inputs.as_in_context(ctx)
        labels = labels.as_in_context(ctx)

        with autograd.record():
            outputs = net(inputs)
            # softmax
            exps = nd.exp(outputs - outputs.min(axis=1).reshape((-1,1)))
            exps = exps / exps.sum(axis=1).reshape((-1,1))
            # cross entropy
            loss = nd.MakeLoss(-nd.log(exps.pick(labels)))
            #
            #loss = gluon.loss.SoftmaxCrossEntropyLoss()(outputs, labels)
            #print(loss)

        loss.backward()
        metric.update(labels, outputs)

        trainer.step(batch_size=inputs.shape[0])

    name, acc = metric.get()
    print('After epoch {}: {} = {}'.format(epoch + 1, name, acc))
    metric.reset()

It works fine if I use gluon.loss.SoftmaxCrossEntropyLoss instead.

When I print the loss in both cases, the output values look the same.
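
For reference, here is a minimal sketch of how the two can be compared on one batch (reusing outputs and labels from the training loop above; nd.softmax is assumed to be equivalent to the manual computation):

ce_loss = gluon.loss.SoftmaxCrossEntropyLoss()
manual = -nd.log(nd.softmax(outputs).pick(labels))  # hand-rolled cross entropy
builtin = ce_loss(outputs, labels)                  # gluon's built-in loss
print(nd.abs(manual - builtin).max())               # should print a value near 0 if they match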

What is the difference?

Thanks in advance.

1 Answer:

Answer 0 (score: 1)

I am not sure why you subtract outputs.min() when computing the softmax. The original softmax function does nothing like that - https://en.wikipedia.org/wiki/Softmax_function. If you drop the subtraction, you get high accuracy values:

# softmax
exps = nd.exp(outputs)
exps = exps / exps.sum(axis=1).reshape((-1, 1))
# cross entropy
loss = nd.MakeLoss(-nd.log(exps.pick(labels)))

I get:

After epoch 1: accuracy = 0.89545
After epoch 2: accuracy = 0.9639
After epoch 3: accuracy = 0.97395
After epoch 4: accuracy = 0.9784
After epoch 5: accuracy = 0.98315
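
As a side note, if the subtraction was meant as a numerical-stability trick, the standard variant subtracts each row's max rather than its min: softmax is invariant to adding a constant to all logits in a row, so the result is mathematically unchanged, while nd.exp can no longer overflow on large logits. A minimal sketch of that variant:

# numerically stable softmax: shift each row so its largest logit is 0
shifted = outputs - outputs.max(axis=1).reshape((-1, 1))
exps = nd.exp(shifted)
exps = exps / exps.sum(axis=1).reshape((-1, 1))
# cross entropy
loss = nd.MakeLoss(-nd.log(exps.pick(labels)))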