Getting probabilities from logits - logits and labels are not the same size

Time: 2017-06-20 05:07:05

Tags: tensorflow

I am trying to classify some object representations with Tensorflow. I use the same architecture as in the Tensorflow Cifar-10 example, with the last layer defined as:

    with tf.variable_scope('sigmoid_linear') as scope:
        weights = _variable_with_weight_decay('weights', [192, num_classes],
                                              stddev=1 / 192.0, wd=0.0)
        biases = _variable_on_cpu('biases', [num_classes],
                                  initializer)
        sigmoid_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
        _activation_summary(sigmoid_linear)

    return sigmoid_linear

In my case num_classes is 2, and the number of channels in the representations fed to the network is 8. In addition, I am currently using only 5 examples for debugging. The output of the last layer has the shape [40, 2]. I would expect the first dimension to come from 5 examples * 8 channels and the second from the number of classes.

To compare the logits and the labels with, for example, tensorflow.nn.SparseSoftmaxCrossEntropyWithLogits, I need them to have a common shape. How should I interpret the content of the logits in their current shape, and how can I reduce the first dimension of the logits so that it is the same as num_classes?
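For context, here is a minimal sketch of the shapes that the underlying op tf.nn.sparse_softmax_cross_entropy_with_logits expects in TF 1.x; the placeholder names num_examples, logits and labels below are only illustrative and are not taken from the code in the question:

    import tensorflow as tf

    num_examples, num_classes = 5, 2
    # Logits: one row of unnormalized class scores per example.
    logits = tf.placeholder(tf.float32, shape=[num_examples, num_classes])
    # Labels: one integer class index per example (not one-hot).
    labels = tf.placeholder(tf.int64, shape=[num_examples])

    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    # Per-class probabilities, shape [num_examples, num_classes].
    probabilities = tf.nn.softmax(logits)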

Edit: The input to the inference function has the shape [5, 101, 1008, 8]. The inference function is defined as:

    def inference(representations):
        """Build the model.
        Args:
          STFT spectra: spectra returned from distorted_inputs() or inputs().
        Returns:
          Logits.
        """
        # conv1
        with tf.variable_scope('conv1') as scope:
            kernel = _variable_with_weight_decay('weights',
                                                 shape=[5, 5, nChannels, 64],
                                                 stddev=5e-2,
                                                 wd=0.0)
            conv = tf.nn.conv2d(representations, kernel, [1, 1, 1, 1], padding='SAME')
            biases = _variable_on_cpu('biases', [64], initializer)
            pre_activation = tf.nn.bias_add(conv, biases)
            conv1 = tf.nn.relu(pre_activation, name=scope.name)
            _activation_summary(conv1)

        # pool1
        pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                               padding='SAME', name='pool1')
        # norm1
        norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                          name='norm1')

        # conv2
        with tf.variable_scope('conv2') as scope:
            kernel = _variable_with_weight_decay('weights',
                                                 shape=[5, 5, 64, 64],
                                                 stddev=5e-2,
                                                 wd=0.0)
            conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
            biases = _variable_on_cpu('biases', [64], initializer)
            pre_activation = tf.nn.bias_add(conv, biases)
            conv2 = tf.nn.relu(pre_activation, name=scope.name)
            _activation_summary(conv2)

        # norm2
        norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                          name='norm2')
        # pool2
        pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1], padding='SAME', name='pool2')

        # local3
        with tf.variable_scope('local3') as scope:
            # Move everything into depth so we can perform a single matrix multiply.
            reshape = tf.reshape(pool2, [batch_size, -1])
            dim = reshape.get_shape()[1].value
            weights = _variable_with_weight_decay('weights', shape=[dim, 384],
                                                  stddev=0.04, wd=0.004)
            biases = _variable_on_cpu('biases', [384], initializer)
            local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
            _activation_summary(local3)

        # local4
        with tf.variable_scope('local4') as scope:
            weights = _variable_with_weight_decay('weights', shape=[384, 192],
                                                  stddev=0.04, wd=0.004)
            biases = _variable_on_cpu('biases', [192], initializer)
            local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
            _activation_summary(local4)

        with tf.variable_scope('sigmoid_linear') as scope:
            weights = _variable_with_weight_decay('weights', [192, num_classes],
                                                  stddev=1 / 192.0, wd=0.0)
            biases = _variable_on_cpu('biases', [num_classes],
                                      initializer)
            sigmoid_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
            _activation_summary(sigmoid_linear)

        return sigmoid_linear

1 Answer:

Answer 0: (score: 0)

After some more debugging I found the problem. The posted code with the layers, originally from the Tensorflow tutorial, works fine (of course it does). I printed all the shapes after each layer and found that the number 40 was not due to 5 examples * 8 channels, but to batch_size = 40, which I had set earlier and which was therefore also higher than the number of training examples. The mismatch began after the reshape in local layer 3. The question can now be closed.
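For reference, a minimal sketch of the kind of per-layer shape check described above, assuming the lines are placed inside inference() where the layer tensors are in scope (TF 1.x static shapes):

    # Print the static shape after each layer to see where the first
    # dimension becomes batch_size instead of the number of examples.
    for name, tensor in [('conv1', conv1), ('pool1', pool1), ('conv2', conv2),
                         ('pool2', pool2), ('local3', local3), ('local4', local4),
                         ('sigmoid_linear', sigmoid_linear)]:
        print(name, tensor.get_shape().as_list())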