Memory leak when evaluating a CNN model for text classification

Posted: 2016-12-21 04:20:07

Tags: memory-leaks tensorflow deep-learning text-classification

I have been adapting the code from this blog post about CNNs for text classification: http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/

Everything works fine! But when I try to use the trained model to predict new instances, it consumes all available memory. No memory seems to be released while evaluating and repeatedly loading the model. As far as I know, memory should be freed after every sess.run call.

Here is the part of the code I am using:

with graph.as_default():

    session_conf = tf.ConfigProto(
        allow_soft_placement=FLAGS.allow_soft_placement,
        log_device_placement=FLAGS.log_device_placement)
    sess = tf.Session(config=session_conf)
    with sess.as_default():

        # Load the saved meta graph and restore variables
        saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
        saver.restore(sess, checkpoint_file)

        # Get the placeholders from the graph by name
        input_x = graph.get_operation_by_name("input_x").outputs[0]
        # input_y = graph.get_operation_by_name("input_y").outputs[0]
        dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]

        # Tensors we want to evaluate
        predictions = graph.get_operation_by_name("output/predictions").outputs[0]

        # Add a vector for probas
        probas = graph.get_operation_by_name("output/scores").outputs[0]

        # Generate batches for one epoch
        print("\nGenerating Batches...\n")
        gc.collect()
        #mem0 = proc.get_memory_info().rss
        batches = data_helpers.batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
        #mem1 = proc.get_memory_info().rss

        print("\nBatches done...\n")
        #pd = lambda x2, x1: 100.0 * (x2 - x1) / mem0
        #print "Allocation: %0.2f%%" % pd(mem1, mem0)
        # Collect the predictions here
        all_predictions = []

        all_probas = []

        for x_test_batch in batches:
            # Calculate probability of the prediction being good
            gc.collect()
            batch_probas = sess.run(tf.reduce_max(tf.nn.softmax(probas), 1), {input_x: x_test_batch, dropout_keep_prob: 1.0})
            batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
            all_predictions = np.concatenate([all_predictions, batch_predictions])
            all_probas = np.concatenate([all_probas, batch_probas])
            # Add summary ops to collect data
            with tf.name_scope("eval") as scope:
                p_h = tf.histogram_summary("eval/probas", batch_probas)
                summary = sess.run(p_h)
                eval_summary_writer.add_summary(summary)

Any help is greatly appreciated.

Cheers

1 Answer:

Answer 0 (score: 2)

Your evaluation loop creates new TensorFlow operations on every iteration (tf.reduce_max(), tf.nn.softmax(), and tf.histogram_summary()), which causes more and more memory to be consumed over time. TensorFlow is most efficient when you run the same graph many times, because it can amortize the cost of optimizing the graph across those executions. Therefore, for the best performance, you should modify your program so that each of these operations is created once, before the for x_test_batch in batches: loop, and then reuse the same operations on every iteration.
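
A minimal sketch of that restructuring, assuming the same variable names as in the question (the names batch_probas_op and probas_summary_op are introduced here only for illustration):

    # Build every op once, before the batch loop, so the graph stops growing.
    batch_probas_op = tf.reduce_max(tf.nn.softmax(probas), 1)
    with tf.name_scope("eval"):
        probas_summary_op = tf.histogram_summary("eval/probas", batch_probas_op)

    all_predictions = []
    all_probas = []

    for x_test_batch in batches:
        feed_dict = {input_x: x_test_batch, dropout_keep_prob: 1.0}
        # A single sess.run per batch; no new ops are added to the graph here.
        batch_predictions, batch_probas, summary = sess.run(
            [predictions, batch_probas_op, probas_summary_op], feed_dict)
        all_predictions = np.concatenate([all_predictions, batch_predictions])
        all_probas = np.concatenate([all_probas, batch_probas])
        eval_summary_writer.add_summary(summary)

As an extra safeguard (not mentioned in the original answer), calling graph.finalize() right after restoring the checkpoint makes TensorFlow raise an exception if any later code tries to add operations to the graph, which makes this kind of leak easy to catch.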