Why is tf.train.import_meta_graph so slow?

Date: 2018-07-05 12:56:05

Tags: python tensorflow

I generated a graph in TensorFlow. The code where I build and save the graph is as follows:

import tensorflow as tf
import numpy as np
import time

input_layer = tf.placeholder(tf.float64, shape=[64, 4], name='input_layer')
output_layer = input_layer + 0.5 * tf.tanh(tf.Variable(tf.random_uniform(shape=[64, 4], \
                                                                         minval=-1, maxval=1, dtype=tf.float64)))

# random_combination is 2-d numpy array of the form:
# [[32, 34, 23, 56],[23,54,33,21],...]
random_combination = np.random.randint(64, size=(1000, 4))

# a collector to collect the values
collector = []

print('start looping')

print(time.asctime(time.localtime(time.time())))

aa = time.time()
# loop through random_combination and pick the elements of output_layer
for ii in range(len(random_combination)):
    i, j, k, l = random_combination[ii]

    # pick the needed elements from output_layer
    # (row indices come from random_combination; columns 0-3 are the
    #  only valid columns for a [64, 4] tensor)
    f1 = tf.gather_nd(output_layer, [[i, 0]])
    f2 = tf.gather_nd(output_layer, [[j, 1]])
    f3 = tf.gather_nd(output_layer, [[k, 2]])
    f4 = tf.gather_nd(output_layer, [[l, 3]])

    tf1 = f1 + 1
    tf2 = f2 + 1
    tf3 = f3 + 1
    tf4 = f4 + 1
    collector.append(0.3 * tf.abs(f1 * f2 * tf3 * tf4 - tf1 * tf2 * f3 * f4))

# loss function
loss = tf.add_n(collector, name='loss')

# learning rate
learning_rate = tf.placeholder(tf.float64, name='learning_rate')

# optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name='optimizer').minimize(loss=loss)

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    print(sess.run(loss, {input_layer: np.random.random([64, 4])}))

    saver.save(sess, 'my_test_model')

print('end looping')

print(time.time() - aa)
print(time.asctime(time.localtime(time.time())))

This code takes about 20 minutes to execute. The output is:

start looping
Fri Jul  6 09:54:14 2018
[ 9.22197149]
end looping
114.7665855884552
Fri Jul  6 09:56:09 2018
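As an aside on what the loop computes: the same element selection and loss can be written as a single vectorized NumPy computation over stand-in values (the array names below are hypothetical; the real `output_layer` is a TensorFlow tensor, not a NumPy array):

```python
import numpy as np

np.random.seed(0)
output = np.random.random((64, 4))            # stand-in for output_layer's value
comb = np.random.randint(64, size=(1000, 4))  # same shape as random_combination

# one element per pick: row index taken from comb, one column per pick
f1 = output[comb[:, 0], 0]
f2 = output[comb[:, 1], 1]
f3 = output[comb[:, 2], 2]
f4 = output[comb[:, 3], 3]
tf1, tf2, tf3, tf4 = f1 + 1, f2 + 1, f3 + 1, f4 + 1

# one term per combination; summing matches tf.add_n over the collector
loss = np.sum(0.3 * np.abs(f1 * f2 * tf3 * tf4 - tf1 * tf2 * f3 * f4))
print(loss)
```

Expressed this way, the whole selection is one indexing step rather than a thousand separately built graph operations.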

Then I restore it with the following code:

import tensorflow as tf
import time
import numpy as np

aa = time.time()
saver = tf.train.import_meta_graph('my_test_model.meta')

print(time.time() - aa)

graph = tf.get_default_graph()
input_layer = graph.get_tensor_by_name("input_layer:0")
loss = graph.get_tensor_by_name("loss:0")
learning_rate = graph.get_tensor_by_name("learning_rate:0")

print(time.time() - aa)

optimizer = graph.get_operation_by_name("optimizer")
value = np.random.random([64, 4])

print(time.time() - aa)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    while True:
        sess.run(optimizer, {input_layer: value, learning_rate: 0.01})
        check = sess.run(loss, {input_layer: value})
        print(check)

        if check[0] < 1:
            break

However, this also takes a very long time to execute.

I understand why building the graph takes about 20 minutes: the loop runs so many times. But restoring the graph should simply re-read the saved graph; there is no loop involved.
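The reasoning above can be made concrete with a rough node count: every iteration appends its own `gather_nd` and arithmetic ops to the graph, and all of them are serialized into `my_test_model.meta`. A back-of-the-envelope sketch (the ten-ops-per-iteration figure is an assumption, not an exact count):

```python
# rough size of the graph the loop builds: each of the 1000 iterations
# adds roughly 10 ops (4 gather_nd, 4 adds, multiplications, abs) --
# the per-iteration count is an assumption, not an exact figure
iterations = 1000
ops_per_iteration = 10
approx_nodes = iterations * ops_per_iteration
print(approx_nodes)  # -> 10000
```

`import_meta_graph` has to deserialize and re-create every one of those nodes, so restore time grows with the loop count just as build time does.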

Why, then, does restoring the graph take so much time in this case?

0 Answers:

No answers yet.