CIFAR-10 tensorflow-keras with "sparse_categorical_crossentropy" produces different results from "sparse_softmax_cross_entropy"

Asked: 2018-12-07 11:03:27

Tags: tensorflow keras

This happens on the raw CIFAR-10 data, with no zero-centering or unit-variance normalization applied.
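("Raw" here means the pixels are fed in as 0-255 values. Assuming the standard loader, the data presumably comes from something like this:)

import tensorflow as tf

# CIFAR-10 as shipped: uint8 pixels in [0, 255], labels as integer class ids
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32')  # cast only; no zero-centering or scaling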

I built a simple linear-classifier model in Keras:

import tensorflow as tf

class LinearClassifier(tf.keras.Model):
  def __init__(self, num_classes, activation=tf.nn.softmax):
    super().__init__()
    # He initialization; note the default activation is softmax,
    # so the model outputs probabilities rather than raw logits
    initializer = tf.keras.initializers.VarianceScaling(2.0)
    self.fc = tf.layers.Dense(num_classes, kernel_initializer=initializer, activation=activation)

  def call(self, inputs, training=None, mask=None):
    inputs = tf.layers.flatten(inputs)
    return self.fc(inputs)
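(The instantiation is not shown in the question; presumably it is something along these lines:)

model = LinearClassifier(num_classes=10)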

and compiled and fit the model with this configuration:

model.compile(optimizer=tf.train.GradientDescentOptimizer(1e-2), 
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=15, batch_size=256, validation_data=(x_val, y_val))

The loss never improves; it stays at about 14.5 the whole time, and the accuracy stays at 10%:

256/49000 [..............................] - ETA: 38s - loss: 14.6341 - acc: 0.0781
2560/49000 [>.............................] - ETA: 4s - loss: 14.6916 - acc: 0.0871 
4864/49000 [=>............................] - ETA: 2s - loss: 14.6018 - acc: 0.0933
7424/49000 [===>..........................] - ETA: 2s - loss: 14.5428 - acc: 0.0973
9728/49000 [====>.........................] - ETA: 1s - loss: 14.4951 - acc: 0.1003
12032/49000 [======>.......................] - ETA: 1s - loss: 14.5219 - acc: 0.0987
14336/49000 [=======>......................] - ETA: 1s - loss: 14.5153 - acc: 0.0992
16896/49000 [=========>....................] - ETA: 1s - loss: 14.5368 - acc: 0.0979
19456/49000 [==========>...................] - ETA: 0s - loss: 14.5320 - acc: 0.0982
22016/49000 [============>.................] - ETA: 0s - loss: 14.5334 - acc: 0.0982
24576/49000 [==============>...............] - ETA: 0s - loss: 14.5247 - acc: 0.0987
27136/49000 [===============>..............] - ETA: 0s - loss: 14.5158 - acc: 0.0993
29440/49000 [=================>............] - ETA: 0s - loss: 14.5093 - acc: 0.0997
31744/49000 [==================>...........] - ETA: 0s - loss: 14.5082 - acc: 0.0998
34048/49000 [===================>..........] - ETA: 0s - loss: 14.5026 - acc: 0.1001
36352/49000 [=====================>........] - ETA: 0s - loss: 14.5146 - acc: 0.0994
38656/49000 [======================>.......] - ETA: 0s - loss: 14.5138 - acc: 0.0994
40960/49000 [========================>.....] - ETA: 0s - loss: 14.5112 - acc: 0.0996
43008/49000 [=========================>....] - ETA: 0s - loss: 14.5026 - acc: 0.1001
45312/49000 [==========================>...] - ETA: 0s - loss: 14.5115 - acc: 0.0996
47616/49000 [============================>.] - ETA: 0s - loss: 14.5093 - acc: 0.0997
49000/49000 [==============================] - 1s 28us/step - loss: 14.5101 - acc: 0.0997 - val_loss: 14.2968 - val_acc: 0.1130

So I tried to set up the same thing in plain TensorFlow:

x = tf.placeholder(tf.float32, shape=x_shape)
y = tf.placeholder(tf.int32, shape=y_shape)

w = tf.get_variable('weight', shape=..., initializer=tf.variance_scaling_initializer(2.0))
b = tf.get_variable('bias', shape=..., initializer=tf.zeros_initializer)
scores = tf.matmul(x, w) + b  # raw logits; `x` is the placeholder above
# tf.losses.sparse_softmax_cross_entropy already returns the mean over the
# batch, so the extra reduce_mean below is a no-op
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=scores)
loss = tf.reduce_mean(loss)
optimizer = tf.train.GradientDescentOptimizer(1e-2)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)

and ran a training loop with the same batch size of 256.
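(The loop itself is not shown in the question; a minimal sketch of what it presumably looks like, with the batching and feed details as assumptions:)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        # iterate over the training set in batches of 256
        for i in range(0, x_train.shape[0], 256):
            feed = {x: x_train[i:i + 256], y: y_train[i:i + 256]}
            _, batch_loss = sess.run([train_op, loss], feed_dict=feed)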

The results:

Starting epoch 0
Iteration 0, train loss = 352.5786, train acc = 0.0938, valid loss, 720770176.0000, valid acc = 0.1190
Iteration 20, train loss = 2322832896.0000, train acc = 0.1133, valid loss, 2136314752.0000, valid acc = 0.1130
Iteration 40, train loss = 1603235840.0000, train acc = 0.1211, valid loss, 903325504.0000, valid acc = 0.1580
Iteration 60, train loss = 564036096.0000, train acc = 0.1680, valid loss, 780342528.0000, valid acc = 0.1510
Iteration 80, train loss = 1203734400.0000, train acc = 0.2188, valid loss, 778021120.0000, valid acc = 0.2290
Iteration 100, train loss = 1254352640.0000, train acc = 0.0703, valid loss, 901918336.0000, valid acc = 0.2140
Iteration 120, train loss = 956386176.0000, train acc = 0.1484, valid loss, 838124160.0000, valid acc = 0.1710
Iteration 140, train loss = 873025728.0000, train acc = 0.1719, valid loss, 924328000.0000, valid acc = 0.1660
Iteration 160, train loss = 822196544.0000, train acc = 0.1055, valid loss, 932239744.0000, valid acc = 0.1530
Iteration 180, train loss = 683297856.0000, train acc = 0.2500, valid loss, 871729536.0000, valid acc = 0.2250

Epoch 0, train loss = 708110976.0000, train acc = 0.1731, valid loss = 666371136.0000, valid acc = 0.2570

As you can see, although the loss values are incomprehensible, the accuracy is improving. After getting this result, I dug into the Keras source code and found this part, which appears to be an extra step compared to plain TensorFlow:

# Note: nn.sparse_softmax_cross_entropy_with_logits
# expects logits, Keras expects probabilities.
if not from_logits:
  epsilon_ = _to_tensor(epsilon(), output.dtype.base_dtype)
  output = clip_ops.clip_by_value(output, epsilon_, 1 - epsilon_)
  output = math_ops.log(output)
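A plausible back-of-the-envelope reading of this snippet, given the numbers in the logs: with unnormalized 0-255 inputs, the pre-softmax scores are huge, so the softmax saturates to a numerically exact one-hot vector. Whenever the argmax is wrong, the true class gets probability 0, which is clipped to epsilon before the log (1e-7 is the Keras backend default; the 10% accuracy figure is taken from the logs above):

import numpy as np

eps = 1e-7                          # keras.backend.epsilon() default
loss_wrong = -np.log(eps)           # true class clipped to eps: ~16.12
loss_right = -np.log(1 - eps)       # true class at probability ~1: ~0
acc = 0.10                          # chance-level accuracy from the logs
print(acc * loss_right + (1 - acc) * loss_wrong)  # ~14.5, matching the Keras run

If that is what is happening, the loss is pinned at -log(eps) for misclassified samples, and inside the clipped region the clip has zero gradient, which would be consistent with the weights never changing.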

So I changed the TensorFlow code to include this part as well, applying the softmax activation, and now I get the same results as Keras.
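(A sketch of that change, reusing the names from the plain-TensorFlow snippet above; eps = 1e-7 is assumed to match keras.backend.epsilon():)

probs = tf.nn.softmax(scores)                  # apply softmax, as the Keras model does
eps = 1e-7
probs = tf.clip_by_value(probs, eps, 1 - eps)  # the extra Keras step quoted above
log_probs = tf.log(probs)
# Keras then hands the log-probabilities to the logits-based op:
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=log_probs)
loss = tf.reduce_mean(loss)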

Knowing this, I thought it might be because the loss is too small relative to the feature values (pixels in the 0-255 range), which would make the gradients small as well. But no matter how much I raise the learning rate, or how wide I make the weight and bias initialization, nothing seems to help: the loss stays at the same value around 14.x, and the weights and biases do not change at all during training.

Can someone explain why this happens, and how it can be fixed without changing the Keras model or normalizing the features?

Thanks.

0 Answers
