Nesting control-dependency contexts in TensorFlow

Date: 2018-03-19 08:10:40

Tags: python tensorflow

Running the test below:

from unittest import TestCase

import tensorflow as tf

class TestControl(TestCase):

  def test_control_dep(self):
    print(tf.__version__)
    a = tf.get_variable('a', initializer=tf.constant(0.0))
    d_optim = tf.assign(a, a + 2)
    g_optim = tf.assign(a, a * 2)
    with tf.control_dependencies([d_optim]):
      with tf.control_dependencies([g_optim]):
        with tf.control_dependencies([g_optim]):
          op = tf.Print(a, [a])
    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      sess.run(op)
      sess.run(op)
      sess.run(op)

prints (for example):

1.4.0
2018-03-18 16:58:08.943349: I C:\tf_jenkins\...\logging_ops.cc:79] [0]
2018-03-18 16:58:08.943349: I C:\tf_jenkins\...\logging_ops.cc:79] [2]
2018-03-18 16:58:08.943349: I C:\tf_jenkins\...\logging_ops.cc:79] [4]

But I have also seen other outputs, such as [2, 8, 10]. I expected it to print [8, 40, 168] (really, I want to make sure g_optim executes twice, which I am not sure it does). Why is the printed output nondeterministic, and why does g_optim not always seem to execute?

Note: running this on an Ubuntu GPU server on EC2 (with TensorFlow 1.6) consistently produces 0:

python3 -m unittest tf_test.TestControl.test_control_dep
1.6.0
2018-03-19 08:06:11.614220: ...
2018-03-19 08:06:12.282375: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9610 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
[0]
[0]
[0]
0.0
.
----------------------------------------------------------------------
Ran 1 test in 0.833s

OK

Possibly related:

1 Answer:

Answer 0 (score: 1)

The output is nondeterministic because the assign ops you created have no control dependencies between them, so they can execute in any order.
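To see why the order matters, here is a minimal pure-Python sketch (no TensorFlow needed, values assumed) of the two possible schedules for a single sess.run when a starts at 0.0:

```python
a = 0.0

# Schedule 1: d_optim (a + 2) happens to run before g_optim (a * 2)
after_d_first = (a + 2) * 2   # 4.0

# Schedule 2: g_optim happens to run before d_optim
after_g_first = (a * 2) + 2   # 2.0

print(after_d_first, after_g_first)  # 4.0 2.0
```

Both 2 and 4 show up in the question's outputs, consistent with either order being chosen on a given run.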

To execute the assignments in the order you want, the assign ops need to be created with the control dependencies in place, i.e. created inside the nested contexts. Something like this:
a = tf.get_variable('a', initializer=tf.constant(0.0))
with tf.control_dependencies([tf.assign(a, a + 2)]):
  with tf.control_dependencies([tf.assign(a, a * 2)]):
    with tf.control_dependencies([tf.assign(a, a * 2)]):
      op = tf.Print(a, [a])

What your code is doing instead is building a set of two control dependencies (the duplicated g_optim is collapsed into one) and attaching those dependencies to the tf.Print op only.
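As a sanity check on the expected values, this pure-Python sketch (no TensorFlow needed) traces the chained updates that the nested-creation version enforces across three consecutive sess.run(op) calls:

```python
a = 0.0
printed = []
for _ in range(3):
    a = a + 2   # outermost assign runs first
    a = a * 2   # middle assign depends on the one above
    a = a * 2   # innermost assign runs last, then a is printed
    printed.append(a)
print(printed)  # [8.0, 40.0, 168.0]
```

This reproduces the [8, 40, 168] sequence the questioner expected, since each run applies all three updates in a fixed order.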