Error: TensorFlow BRNN logits and labels must be the same size

Date: 2017-08-30 01:55:34

Tags: tensorflow deep-learning rnn

I am getting this error:

InvalidArgumentError (see above for traceback): logits and labels must
be same size: logits_size=[10,9] labels_size=[7040,9]  [[Node:
SoftmaxCrossEntropyWithLogits =
SoftmaxCrossEntropyWithLogits[T=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/gpu:0"](Reshape, Reshape_1)]]

But I cannot find which tensor is causing this error... I think it comes from a dimension mismatch...

My input has dimensions batch_size * n_steps * n_input,

so it will be 10 * 704 * 100, and I want the output to be

batch_size * n_steps * n_classes => it would be 10 * 704 * 9 after the bidirectional RNN.

How should I change this code to fix the error?

batch_size refers to data like this:

data 1: ABCABCABCAAADDD... ... data 10: ABCCCCABCDBBAA...

and n_steps is the length of each data sequence (each sequence is padded with '0' so that all sequences have the same length): 704

and n_input is how each letter in each data sequence is encoded, like this: A - [1, 2, 1, -1, ..., -1]

The learned output should look like this: output data 1: XYZYXYZYYXY... ... output data 10: ZXYYRZYZZ...

Each output letter is influenced by the surrounding input letters and by the sequence as a whole.
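To make the intended shapes concrete, here is a minimal sketch of what one batch could look like; the numpy arrays and the one-hot label encoding below are assumptions for illustration, not the actual data pipeline:

import numpy as np

batch_size, n_steps, n_input, n_classes = 10, 704, 100, 9

# Hypothetical batch: 10 sequences, 704 steps each,
# every step is a 100-dimensional encoding of one input letter.
batch_x = np.random.randn(batch_size, n_steps, n_input).astype(np.float32)

# Hypothetical labels: a one-hot vector over the 9 output letters at every step.
batch_y = np.eye(n_classes, dtype=np.float32)[
    np.random.randint(n_classes, size=(batch_size, n_steps))]

print(batch_x.shape)  # (10, 704, 100)
print(batch_y.shape)  # (10, 704, 9)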

import tensorflow as tf
from tensorflow.contrib import rnn

# Training Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 10
display_step = 10
# Network Parameters
n_input = 100   # size of the encoding vector for each input letter
n_steps = 704   # timesteps (padded sequence length)
n_hidden = 50   # hidden layer num of features
n_classes = 9   # number of distinct output letters

x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_steps, n_classes])

weights = {
    'out': tf.Variable(tf.random_normal([2*n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}
def BiRNN(x, weights, biases):
    x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))

    # Forward direction cell
    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Get lstm cell output
    try:
        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                          dtype=tf.float32)
    except Exception:  # Old TensorFlow versions only return outputs, not states
        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                               dtype=tf.float32)
    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = BiRNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < training_iters:
        batch_x, batch_y = next_batch(batch_size, r_big_d, y_r_big_d)
        #batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")
    test_x, test_y = next_batch(batch_size, v_big_d, y_v_big_d)
    print("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: test_x, y: test_y}))
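As a quick way to locate the mismatch (a hedged diagnostic sketch, not part of the original code), you can print the static shapes of the logits and labels before building the loss:

# Hypothetical check: compare the shapes that feed the loss.
print(pred.get_shape().as_list())  # [None, 9]       -> only the last timestep survives BiRNN()
print(y.get_shape().as_list())     # [None, 704, 9]  -> flattened to [7040, 9] inside the loss op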

1 Answer:

Answer 0 (score: 1):

The first return value of static_bidirectional_rnn is a list of tensors - one for each step. By using only the last one in tf.matmul you throw away all the rest. Instead, stack them into a single tensor of the appropriate shape, reshape for the matmul, and then reshape back.

outputs = tf.stack(outputs, axis=1)
outputs = tf.reshape(outputs, (batch_size*n_steps, 2*n_hidden))  # bidirectional outputs are fw+bw concatenated
outputs = tf.matmul(outputs, weights['out']) + biases['out']
outputs = tf.reshape(outputs, (batch_size, n_steps, n_classes))

Alternatively, you could use tf.einsum:

outputs = tf.stack(outputs, axis=1)
outputs = tf.einsum('ijk,kl->ijl', outputs, weights['out']) + biases['out']
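Either way the logits become a [batch_size, n_steps, n_classes] tensor that matches y, and tf.nn.softmax_cross_entropy_with_logits applies the softmax over the last axis, so the existing cost line should then line up. As a further assumption about the rest of your graph (not part of the original answer), the accuracy would then take the argmax over the class axis rather than axis 1:

# Hedged sketch: with 3-D logits, compare predictions per step over the class axis.
correct_pred = tf.equal(tf.argmax(pred, 2), tf.argmax(y, 2))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))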