Tensorflow restored weights are not set

Date: 2017-03-06 21:05:08

Tags: python tensorflow

I am trying to restore a model I trained in Tensorflow. The problem is that the weights do not appear to be properly restored.

For training, I defined the weights and biases as:

W = {
   'h1': tf.Variable(tf.random_normal([n_inputs, n_hidden_1]), name='wh1'),
   'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='wh2'),
   'o': tf.Variable(tf.random_normal([n_hidden_2, n_classes]), name='wo')
}
b = {
   'b1': tf.Variable(tf.random_normal([n_hidden_1]), name='bh1'),
   'b2': tf.Variable(tf.random_normal([n_hidden_2]), name='bh2'),
   'o': tf.Variable(tf.random_normal([n_classes]), name='bo')
}

Then I do some training on my own custom 2D image dataset and save the model by calling tf.train.Saver:
saver = tf.train.Saver()
saver.save(sess, 'tf.model')

Later on I want to restore that model with the exact same weights, so I build the model just as before (also with the random_normal initialization) and call tf.train.Saver.restore:

saver = tf.train.import_meta_graph('tf.model.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))

Now, if I call:

temp = sess.run(W['h1'][0][0])
print temp

I get random values instead of the restored values of the weights.

I am drawing a blank on this one; can someone point me in the right direction?

FYI, I have tried (without luck) to simply declare the tf.Variables without an initial value, but I keep getting:

ValueError: initial_value must be specified.

even though Tensorflow itself states that it should be possible to simply declare variables without an initial value (https://www.tensorflow.org/programmers_guide/variables, section: Restoring Values).
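
For reference, the pattern that docs section describes looks roughly like this (a minimal sketch; the shapes and the checkpoint path are assumed from the snippets above). The key point is that the initializer op is never run, so the variables only ever hold the restored values:

import tensorflow as tf

# Build the same variables as at training time (shapes assumed).
W_h1 = tf.Variable(tf.random_normal([784, 256]), name='wh1')

saver = tf.train.Saver()
with tf.Session() as sess:
    # Deliberately no tf.global_variables_initializer() here:
    # restore() itself assigns the saved values to the variables.
    saver.restore(sess, 'tf.model')
    print(sess.run(W_h1[0][0]))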

Update 1

When I run, as suggested:

all_vars = tf.global_variables()
for v in all_vars:
   print v.name

I get the following output:

wh1:0
wh2:0
wo:0
bh1:0
bh2:0
bo:0
wh1:0
wh2:0
wo:0
bh1:0
bh2:0
bo:0
beta1_power:0
beta2_power:0
wh1/Adam:0
wh1/Adam_1:0
wh2/Adam:0
wh2/Adam_1:0
wo/Adam:0
wo/Adam_1:0
bh1/Adam:0
bh1/Adam_1:0
bh2/Adam:0
bh2/Adam_1:0
bo/Adam:0
bo/Adam_1:0

which shows that the variables are indeed read in. However, calling

print sess.run("wh1:0")

results in the error: Attempting to use uninitialized value wh1
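
A quick way to check what the checkpoint file actually contains, independent of the in-memory graph (a sketch assuming TF 1.x and the checkpoint prefix from above):

import tensorflow as tf

# Read the checkpoint directly; no session or graph needed.
reader = tf.train.NewCheckpointReader('tf.model')
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)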

4 Answers:

Answer 0 (score: 3)

So with your help I ended up splitting the saving and restoring parts of my program into two files, to make sure that no unwanted variables were initialized.

Training and saving routine, fnn.py:

def build(self, topology):
    """
    Builds the topology of the model
    """

    # Sanity check
    assert len(topology) == 4

    n_inputs = topology[0]
    n_hidden_1 = topology[1]
    n_hidden_2 = topology[2]
    n_classes = topology[3]

    # Sanity check
    assert self.img_h * self.img_w == n_inputs

    # Instantiate TF Placeholders
    self.x = tf.placeholder(tf.float32, [None, n_inputs], name='x')
    self.y = tf.placeholder(tf.float32, [None, n_classes], name='y')
    self.W = {
        'h1': tf.Variable(tf.random_normal([n_inputs, n_hidden_1]), name='wh1'),
        'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='wh2'),
        'o': tf.Variable(tf.random_normal([n_hidden_2, n_classes]), name='wo')
    }
    self.b = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1]), name='bh1'),
        'b2': tf.Variable(tf.random_normal([n_hidden_2]), name='bh2'),
        'o': tf.Variable(tf.random_normal([n_classes]), name='bo')
    }

    # Create model
    self.l1 = tf.nn.sigmoid(tf.add(tf.matmul(self.x, self.W['h1']), self.b['b1']))
    self.l2 = tf.nn.sigmoid(tf.add(tf.matmul(self.l1, self.W['h2']), self.b['b2']))
    logits = tf.add(tf.matmul(self.l2, self.W['o']), self.b['o'])

    # Define predict operation
    self.predict_op = tf.argmax(logits, 1)
    probs = tf.nn.softmax(logits, name='probs')

    # Define cost function
    self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=self.y))

    # Adding these to collection so we can restore them again
    tf.add_to_collection('inputs', self.x)
    tf.add_to_collection('inputs', self.y)
    tf.add_to_collection('outputs', logits)
    tf.add_to_collection('outputs', probs)
    tf.add_to_collection('outputs', self.predict_op)

def train(self, X, Y, n_epochs=10, learning_rate=0.001, logs_path=None):
    """
    Trains the Model
    """
    self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)

    costs = []

    # Instantiate TF Saver
    saver = tf.train.Saver()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())

        # start the threads used for reading files
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)

        # Compute total number of batches
        total_batch = int(self.get_num_examples() / self.batch_size)

        # start training
        for epoch in range(n_epochs):
            for i in range(total_batch):

                batch_xs, batch_ys = sess.run([X, Y])

                # run the training step with feed of images
                _, cost = sess.run([self.optimizer, self.cost], feed_dict={self.x: batch_xs,
                                                                           self.y: batch_ys})
                costs.append(cost)
                print "step %d" % (epoch * total_batch + i)
            #costs.append(cost)
            print "Epoch %d" % epoch

        saver.save(sess, self.model_file)

        temp = sess.run(self.W['h1'][0][0])
        print temp

        if self.visu:
            plt.plot(costs)
            plt.show()

        # finalize
        coord.request_stop()
        coord.join(threads)

Prediction routine, fnn_eval.py:

with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())

        g = tf.get_default_graph()

        # restore the model
        self.saver = tf.train.import_meta_graph(self.model_file)
        self.saver.restore(sess, tf.train.latest_checkpoint('./tfmodels/fnn/'))

        wh1 = g.get_tensor_by_name("wh1:0")
        print sess.run(wh1[0][0])

        x, y = tf.get_collection('inputs')
        logits, probs, predict_op = tf.get_collection('outputs')

        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)

        predictions = []

        print Y.eval()

        for i in range(1):#range(self.get_num_examples()):
            batch_xs = sess.run(X)
            # Reshape batch_xs if only a single image is given
            #   (numpy is 4D: batch_size * heigth * width * channels)
            batch_xs = np.reshape(batch_xs, (-1, self.img_w * self.img_h))
            prediction, probabilities, logit = sess.run([predict_op, probs, logits], feed_dict={x: batch_xs})
            predictions.append(prediction[0])

        # finalize
        coord.request_stop()
        coord.join(threads)

Answer 1 (score: 0)

I guess the problem may be caused by creating a new variable when restoring the model, instead of getting the already existing one. I tried this code:

saver = tf.train.import_meta_graph('./model.ckpt-10.meta')
w1 = None
for v in tf.global_variables():
        print v.name

w1 = tf.get_variable('wh1', [])

init = tf.global_variables_initializer()
sess.run(init)

saver.restore(sess, './model.ckpt-10')

for v in tf.global_variables():
    print v.name

In the output you can clearly see that it creates a new variable named wh1_1:0.

If you try this instead:

w1 = None

for v in tf.global_variables():
    print v.name
    if v.name == 'wh1:0':
        w1 = v

init = [tf.global_variables_initializer(), tf.local_variables_initializer()]
sess.run(init)

saver.restore(sess, './model.ckpt-10')

for v in tf.global_variables():
    print v.name

temp = sess.run(w1)
print temp[0][0]

it works without a problem.

Tensorflow recommends that it is better to use tf.variable_scope(), like this (link):

with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v", [1])
assert v1 == v
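
Building on that, a minimal sketch (shapes assumed) of restoring via tf.get_variable, which only needs a shape rather than an initial value, so it also sidesteps the ValueError from the question:

import tensorflow as tf

# tf.get_variable requires a shape but no explicit initial value,
# and restore() overwrites whatever the default initializer would produce.
w1 = tf.get_variable('wh1', shape=[784, 256])

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, './model.ckpt-10')
    print(sess.run(w1)[0][0])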

Answer 2 (score: 0)

I ran into the same problem when saving the model in the saved_model format. Anyone who uses the function add_meta_graph_and_variables to save a model for serving should be careful with the parameter legacy_init_op: "Legacy support for op or group of ops to execute after the restore op upon a load."
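
For concreteness, a minimal sketch of that export path (assuming TF 1.x's SavedModelBuilder API; the export directory is made up):

import tensorflow as tf

export_dir = './exported_model'  # hypothetical path
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # legacy_init_op runs after the variables have been restored on load;
    # tables_initializer is the usual thing to put here.
    legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        legacy_init_op=legacy_init_op)
builder.save()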

Answer 3 (score: -2)

You want to pass a var_list to the Saver.

In your case, the variable list will come from your W and b dictionaries: var_list = list(W.values()) + list(b.values()). Then, to restore the model, pass var_list to the Saver: saver = tf.train.Saver(var_list=var_list).

Next, you need to get the checkpoint state: model = tf.train.get_checkpoint_state(&lt;your saved model directory&gt;). After that, you can restore the trained weights:

var_list = list(W.values())+list(b.values())
saver = tf.train.Saver(var_list=var_list)
model = tf.train.get_checkpoint_state('./model/')

with tf.Session() as sess:
    saver.restore(sess,model.model_checkpoint_path)
    #Now use the pretrained weights
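
One defensive detail worth adding (my own variant, not part of the answer above): tf.train.get_checkpoint_state returns None when it finds no checkpoint in the directory, so guarding before restore avoids an AttributeError:

model = tf.train.get_checkpoint_state('./model/')
if model is not None and model.model_checkpoint_path:
    with tf.Session() as sess:
        saver.restore(sess, model.model_checkpoint_path)
        # Now use the pretrained weights, e.g.:
        print(sess.run(W['h1'][0][0]))
else:
    print('No checkpoint found in ./model/')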