Can't run tape.gradient() twice?

Posted: 2019-07-14 04:03:36

Tags: tensorflow

I'm trying to use tape.gradient. The problem is that when I run the code a second time without restarting the kernel in Jupyter, it raises this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-19-d8e5c691e377> in <module>
     21     # Backward operations
     22     # 1. we will take gradient of loss w.r.t W, b, and V
---> 23     gW, gb, gV = tape.gradient(loss, sources=[W, b, V])
     24     if _ ==1:
     25         print(gW)

~/miniconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
    978         output_gradients=output_gradients,
    979         sources_raw=flat_sources_raw,
--> 980         unconnected_gradients=unconnected_gradients)
    981 
    982     if not self._persistent:

~/miniconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
     74       output_gradients,
     75       sources_raw,
---> 76       compat.as_str(unconnected_gradients.value))

AttributeError: 'RefVariable' object has no attribute '_id'

I tried printing the variables to see what was happening, and I found that on the second run the variable W is not re-initialized with random values. I guess this has something to do with the gradient? I don't know; I'm new to TensorFlow.
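The check was roughly like this (an illustrative sketch, not the exact cell I ran):

print(W)  # after re-running the cell without a kernel restart, W still shows the first run's values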

I tried resetting the default graph, but nothing changed.
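The reset attempt was roughly the following (a minimal sketch; tf.reset_default_graph() is the TF 1.x call for clearing the default graph):

tf.reset_default_graph()  # clear the default graph before re-running the cell; the AttributeError still occurs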

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

tf.enable_eager_execution()
learning_rate = 0.15  # learning rate for SGD
EPOCHS = 35
HIDDEN = 32

# use tf.Variable to declare things we will differentiate with respect to

W = tf.Variable(np.random.normal(size=(2, HIDDEN)) / np.sqrt(HIDDEN))
b = tf.Variable(np.zeros(shape=(HIDDEN, )))
V = tf.Variable(np.random.normal(size=(HIDDEN, 1)) / np.sqrt(HIDDEN))

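# Note: X and y (the training inputs and targets) were defined in an earlier
# cell of the notebook; they are not shown in this question.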
losses = []
for i in range(EPOCHS):
    # instantiate the tape that will be used to record operations
    with tf.GradientTape() as tape:
        # Forward operations
        h = tf.nn.relu(X @ W + b)  # compute the output of the hidden layer
        y_pred = h @ V  # compute the output of the output layer
        loss = 0.5 * tf.reduce_sum((y_pred - y)**2)
    losses.append(loss)

    # Backward operations
    # 1. take the gradient of loss w.r.t. W, b, and V
    gW, gb, gV = tape.gradient(loss, sources=[W, b, V])
    if i == 1:
        print(gW)
    # 2. update each of the parameters 
    W.assign_sub(learning_rate * gW)
    b.assign_sub(learning_rate * gb)
    V.assign_sub(learning_rate * gV)

print("Predictions after training:\n{}".format(y_pred))
print(losses)
plt.plot(losses)

0 Answers:

There are no answers yet.