Running test data through a TensorFlow model after closing the session

Date: 2018-07-30 03:17:42

Tags: python-3.x tensorflow jupyter-notebook

I have a convnet (not my original code) that I'm trying to reproduce, and it will only run the test dataset through the trained model if I train and test in the same sitting. Between sittings I only adjusted a few lines of code to get it to run the test data, so I'm not sure what could be going wrong. I noticed that 'logits_out' shows up as a dataflow edge rather than a node in TensorBoard; since edges are not automatically saved in the checkpoint, and the original code never saves it as a node or in any other form, is that why it can't be called after the first sitting? Here is the general structure of the training phase:

tf.reset_default_graph()
graph = tf.Graph()

with graph.as_default():
    with tf.name_scope('1st_pool'):
        #first layer
#subsequent layers

with graph.as_default():
    #flattening, dropout, optimization, etc...
    #some summary.scalar for loss analyses
    logits_out = tf.layers.dense(flat, 1) #flat is the flattened array

    saved_1 = tf.train.Saver()
    trained_event = tf.summary.FileWriter('./CNN/train', graph=graph)

    test_event = tf.summary.FileWriter('./CNN/test', graph=graph)

    merged = tf.summary.merge_all()

with tf.Session(graph=graph) as sess:
    #training and "validating"
    sess.run(tf.global_variables_initializer())
    #running train summaries

    if step == test_round:
        #running test summaries
        saved_1.save(sess, './CNN/model_1.ckpt')
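
(A side note on the edge-vs-node question above: a checkpoint only stores the values of tf.Variable objects, not the intermediate tensors that show up as edges in TensorBoard. A quick way to see what actually ended up in the checkpoint, as a minimal sketch assuming the './CNN' directory used above:)

import tensorflow as tf

# lists the variables stored in the latest checkpoint under ./CNN;
# 'logits_out' itself will not appear here, only variables such as the
# dense layer's kernel and bias
for name, shape in tf.train.list_variables('./CNN'):
    print(name, shape)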

(Edit: I had pasted the wrong code.) This code runs successfully in the same sitting, while the graph is still open:

with tf.Session(graph=graph) as sess:

    saved_1.restore(sess, tf.train.latest_checkpoint('./CNN'))
    #
    pred = sess.run(logits_out, feed_dict={some inputs for placeholders})
    #
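
(This works in the same sitting only because logits_out is still a live Python variable bound to the tensor in the in-memory graph; restoring the checkpoint only refills the variable values. A small check of the graph-level name that the same tensor answers to, assuming the graph built above is still around:)

print(logits_out.name)                                     # e.g. 'dense/BiasAdd:0'
same_tensor = graph.get_tensor_by_name(logits_out.name)    # fetches the identical tensor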

Only two lines were adjusted (shown below) so that the meta file is loaded into a new graph the next day, but when I try to run it in a different sitting I get the error "name 'logits_out' is not defined" (in fact I get the same error for any other variable I try to sess.run):

with tf.Session(graph=tf.get_default_graph()) as sess:
    saved_1 = tf.train.import_meta_graph('./CNN/model_1.ckpt.meta')
    saved_1.restore(sess, tf.train.latest_checkpoint('./CNN'))
    pred = sess.run(logits_out, feed_dict={some inputs for placeholders})
    #

Edit: I suspect this may be because I'm missing a scope after restoring the session/graph the next day, or misunderstanding how TensorFlow names things, but I don't see how — the only thing I give a name to is the pooling scope.
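
(For what it's worth, "name 'logits_out' is not defined" is a plain Python NameError rather than a graph-scoping problem: in a new sitting the variable logits_out was simply never assigned. After tf.train.import_meta_graph the tensor does exist in the imported graph, but it has to be fetched by its graph name. A minimal sketch of the adjusted block, assuming tf.layers.dense kept its default scope so the logits tensor is named 'dense/BiasAdd:0', and assuming an input placeholder named 'x' — neither name comes from the original code, and the real ones can be listed from the graph:)

with tf.Session(graph=tf.get_default_graph()) as sess:
    saved_1 = tf.train.import_meta_graph('./CNN/model_1.ckpt.meta')
    saved_1.restore(sess, tf.train.latest_checkpoint('./CNN'))

    # the training script's Python variables do not exist in this sitting,
    # so look the tensors up by name in the imported graph
    # print([op.name for op in sess.graph.get_operations()])  # to find the real names
    logits_out = sess.graph.get_tensor_by_name('dense/BiasAdd:0')  # assumed default name
    x = sess.graph.get_tensor_by_name('x:0')                       # assumed placeholder name
    pred = sess.run(logits_out, feed_dict={x: test_inputs})        # test_inputs: the test batch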

1 Answer:

Answer 0 (score: 0)

By running this code today to build the graph, I was able to run data through the model:

tf.reset_default_graph()
graph = tf.Graph()

with graph.as_default():
    with tf.name_scope('1st_pool'):
        #first layer
#subsequent layers

with graph.as_default():
    #flattening, dropout, optimization, etc...
    #some summary.scalar for loss analyses
    logits_out = tf.layers.dense(flat, 1) #flat is the flattened array

    saved_1 = tf.train.Saver()
    trained_event = tf.summary.FileWriter('./CNN/train', graph=graph)

    test_event = tf.summary.FileWriter('./CNN/test', graph=graph)

    merged = tf.summary.merge_all()

with tf.Session(graph=graph) as sess:
    #training and "validating"
    sess.run(tf.global_variables_initializer())
    #running train summaries

    if step == test_round:
        #running test summaries
        saved_1.save(sess, './CNN/model_1.ckpt')

and then by running the code with those 2 lines left unedited:

with tf.Session(graph=graph) as sess:

    saved_1.restore(sess, tf.train.latest_checkpoint('./CNN'))
    #
    pred = sess.run(logits_out, feed_dict={some inputs for placeholders})
    #

So the gist of everything on SO was that I don't have to use tf.train.import_meta_graph. But then what I don't understand is: what is tf.train.import_meta_graph for? I thought it imported the graph from the metadata saved in the '.meta' file, so that I could avoid having to rebuild the graph from source? (Note: I'll delete this postscript question once it's settled.)
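
(On the postscript: avoiding the source rebuild is indeed what tf.train.import_meta_graph is for — the '.meta' file holds the serialized GraphDef plus the Saver definition, so the graph can be recreated from it the next day. What it cannot restore are the Python variables of the training script, so the tensors have to be re-acquired by name, and giving them explicit names at construction time makes that painless. A sketch under those assumptions — the names 'x' and 'logits_out' are illustrative, not from the original code:)

# at construction time (in the training script), name the tensors you will need later:
#     x = tf.placeholder(tf.float32, [None, height, width, channels], name='x')
#     logits_out = tf.identity(tf.layers.dense(flat, 1), name='logits_out')

# the next day, in a fresh process, no source rebuild required:
import tensorflow as tf

restored_graph = tf.Graph()
with restored_graph.as_default():
    saved_1 = tf.train.import_meta_graph('./CNN/model_1.ckpt.meta')

with tf.Session(graph=restored_graph) as sess:
    saved_1.restore(sess, tf.train.latest_checkpoint('./CNN'))
    x = restored_graph.get_tensor_by_name('x:0')
    logits_out = restored_graph.get_tensor_by_name('logits_out:0')
    pred = sess.run(logits_out, feed_dict={x: test_inputs})  # test_inputs: the test batch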