How to calculate accuracy when training an RNN language model in TensorFlow?

Asked: 2019-01-29 12:08:21

Tags: python tensorflow lstm

I am using this word-level RNN language model: https://github.com/hunkim/word-rnn-tensorflow

How can I calculate the accuracy of the RNN model for each epoch?

Below is the training code, which prints the training loss and other information for each epoch:

for e in range(model.epoch_pointer.eval(), args.num_epochs):
    # Decay the learning rate and rewind the data loader at the start of each epoch.
    sess.run(tf.assign(model.lr, args.learning_rate * (args.decay_rate ** e)))
    data_loader.reset_batch_pointer()
    state = sess.run(model.initial_state)
    speed = 0
    if args.init_from is None:
        assign_op = model.epoch_pointer.assign(e)
        sess.run(assign_op)
    if args.init_from is not None:
        data_loader.pointer = model.batch_pointer.eval()
        args.init_from = None
    for b in range(data_loader.pointer, data_loader.num_batches):
        start = time.time()
        x, y = data_loader.next_batch()
        feed = {model.input_data: x, model.targets: y, model.initial_state: state,
                model.batch_time: speed}
        # One training step: fetch the summary, the loss and the final RNN state,
        # and run the optimizer and the batch-pointer increment.
        summary, train_loss, state, _, _ = sess.run([merged, model.cost, model.final_state,
                                                     model.train_op, model.inc_batch_pointer_op], feed)
        train_writer.add_summary(summary, e * data_loader.num_batches + b)
        speed = time.time() - start
        if (e * data_loader.num_batches + b) % args.batch_size == 0:
            print("{}/{} (epoch {}), train_loss = {:.3f}, time/batch = {:.3f}"
                  .format(e * data_loader.num_batches + b,
                          args.num_epochs * data_loader.num_batches,
                          e, train_loss, speed))
        if (e * data_loader.num_batches + b) % args.save_every == 0 \
                or (e == args.num_epochs - 1 and b == data_loader.num_batches - 1):  # save for the last result
            checkpoint_path = os.path.join(args.save_dir, 'model.ckpt')
            saver.save(sess, checkpoint_path, global_step=e * data_loader.num_batches + b)
            print("model saved to {}".format(checkpoint_path))
train_writer.close()

2 Answers:

Answer 0 (score: 1)

Since the model has both the targets and the prediction probabilities for each class, you can reduce the probability tensor to keep only the class index with the highest probability:

predictions = tf.cast(tf.argmax(model.probs, axis=2), tf.int32)

Then you can compare against the targets to check whether each prediction was correct:

correct_preds = tf.equal(predictions, model.targets)

Finally, the accuracy is the ratio of correct predictions to the total number of inputs, i.e. the mean of this boolean tensor:

accuracy = tf.reduce_mean(tf.cast(correct_preds, tf.float32))
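
To get a per-epoch number out of this op, one option is to evaluate accuracy alongside the loss inside the training loop and average it over the batches of the epoch. A minimal sketch, assuming the accuracy op above has been added to the graph and that sess, model, state and data_loader are the ones from the question (the summary and batch-pointer fetches from the original loop are omitted here):

# Sketch: accumulate per-batch accuracy and report the epoch average.
epoch_accuracy = 0.0
for b in range(data_loader.pointer, data_loader.num_batches):
    x, y = data_loader.next_batch()
    feed = {model.input_data: x, model.targets: y, model.initial_state: state}
    batch_accuracy, train_loss, state, _ = sess.run(
        [accuracy, model.cost, model.final_state, model.train_op], feed)
    epoch_accuracy += batch_accuracy
print("epoch {} accuracy = {:.3f}".format(e, epoch_accuracy / data_loader.num_batches))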

Answer 1 (score: 0)

You can also use TensorFlow's tf.metrics.accuracy function:

accuracy, accuracy_update_op = tf.metrics.accuracy(labels=tf.argmax(y, axis=2),
                                                   predictions=tf.argmax(predictions, axis=2),
                                                   name='accuracy')
running_vars_accuracy = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope="LSTM/Accuracy")

The accuracy_update_op operation updates two local variables on every batch:

[<tf.Variable 'accuracy/total:0' shape=() dtype=float32_ref>,
 <tf.Variable 'accuracy/count:0' shape=() dtype=float32_ref>]
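
These running variables have to be reset at the start of every epoch so the metric does not accumulate across epochs. A minimal sketch of the initializer that the loop below refers to as running_vars_initializer_accuracy (the name comes from the snippet; the initializer itself is assumed):

# Re-running this op resets the metric's total and count variables to zero.
running_vars_initializer_accuracy = tf.variables_initializer(var_list=running_vars_accuracy)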

Then, at the end of each epoch, simply run the accuracy op to get the overall accuracy:

for epoch in range(num_epochs):
    avg_cost_train = 0.
    total_train_batch = int((len(X_train) / batch_size) + 1)

    # Reset the accuracy running variables (total/count) at the start of the epoch.
    running_vars_initializer_accuracy.run()
    for _ in range(total_train_batch):
        # Xtrain / ytrain are the inputs and targets of the current mini-batch.
        _, miniBatchCost_train, miniBatchAccuracy_train = sess.run([trainer, loss, accuracy_update_op],
                                                                   feed_dict={X: Xtrain, y: ytrain})
        avg_cost_train += miniBatchCost_train / total_train_batch
    # Read the accumulated accuracy once per epoch, outside the batch loop.
    accuracy_train = sess.run(accuracy)

The point to note here is not to fetch the metric op and its update op (here, accuracy and accuracy_update_op) in the same session.run() call.
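
Applied to the word-rnn-tensorflow model from the question, the same pattern could look roughly as follows. This is only a sketch: model.probs and model.targets are assumed to be shaped [batch, seq_length, vocab] and [batch, seq_length] respectively (as the first answer assumes), and the feed omits the initial state and the other bookkeeping from the original loop:

# Streaming word-level accuracy for the model from the question (assumed tensor names/shapes).
word_predictions = tf.cast(tf.argmax(model.probs, axis=2), tf.int32)
word_accuracy, word_accuracy_update = tf.metrics.accuracy(
    labels=model.targets, predictions=word_predictions, name='word_accuracy')
word_accuracy_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope='word_accuracy')
word_accuracy_init = tf.variables_initializer(var_list=word_accuracy_vars)

for e in range(args.num_epochs):
    data_loader.reset_batch_pointer()
    sess.run(word_accuracy_init)  # reset total/count for this epoch
    for b in range(data_loader.num_batches):
        x, y = data_loader.next_batch()
        # Update the metric together with the training step; do not fetch word_accuracy here.
        sess.run([model.train_op, word_accuracy_update],
                 {model.input_data: x, model.targets: y})
    # Read the accumulated accuracy once, after the epoch's batches.
    print("epoch {} accuracy = {:.3f}".format(e, sess.run(word_accuracy)))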