Multiplying two tensors

Asked: 2017-06-26 10:26:24

Tags: python tensorflow

I have two tensors in TensorFlow with the following two shapes:

print(tf.valid_dataset.get_shape())
print(weights1.get_shape())

Which results in:

(10000, 784)
(784, 1024)

However, if I try to multiply them, like this:

tf.matmul(tf_valid_dataset, weights1)

I get:

Tensor("Variable:0", shape=(784, 1024), dtype=float32_ref) must be from the same graph as Tensor("Const:0", shape=(10000, 784), dtype=float32).

Since I'm multiplying them along the dimension where they both have size 784, this seems correct to me.
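
As a sanity check, the same shapes multiply fine when both tensors live in a single default graph (a minimal sketch with zero-filled stand-in data, not my real dataset):

import numpy as np
import tensorflow as tf

# (10000, 784) x (784, 1024) is a valid matmul, so the inner dimensions
# are not the problem.
data = tf.constant(np.zeros((10000, 784), dtype=np.float32))
w = tf.Variable(tf.truncated_normal([784, 1024]))
product = tf.matmul(data, w)
print(product.get_shape())  # (10000, 1024)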

Any idea what might be wrong?

EDIT:

The code I have before the print statements is:

num_hidden_nodes=1024
batch_size = 128
learning_rate = 0.5

graph = tf.Graph()
with graph.as_default():
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size*image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf.valid_dataset = tf.constant(valid_dataset) 
    tf.test_dataset = tf.constant(test_dataset)

    weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
    biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
    weights2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
    biases2 = tf.Variable(tf.zeros([num_labels]))
    weights = [weights1, biases1, weights2, biases2]

    lay1_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
    logits = tf.matmul(lay1_train, weights2) + biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))

    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
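
Every tensor and variable records the graph it was created in, so one way to narrow down a same-graph error (a small diagnostic sketch, assuming the tf_valid_dataset used in the matmul above is in scope) is:

# Compare the .graph attribute of the two matmul operands directly.
print(weights1.graph is graph)          # created under `with graph.as_default()`
print(tf_valid_dataset.graph is graph)  # False here would explain the error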

1 Answer:

Answer 0 (score: 0)

Your code seems correct to me. Please check it once more, and verify it by running the following code:

import numpy as np
import sklearn.preprocessing
import tensorflow as tf

num_hidden_nodes = 1024
batch_size = 1000
learning_rate = 0.5
image_size = 28
num_labels = 10


tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size*image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
# tf.valid_dataset = tf.constant(valid_dataset) 
# tf.test_dataset = tf.constant(test_dataset)

weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
weights2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
weights = [weights1, biases1, weights2, biases2]

lay1_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
logits = tf.matmul(lay1_train, weights2) + biases2
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())  # initialize_all_variables() is the deprecated pre-1.0 name
input_data = np.random.randn(batch_size, 784)
input_labels = [np.random.randint(0, 10) for _ in range(batch_size)]

label_binarizer = sklearn.preprocessing.LabelBinarizer()
transformed_labels = label_binarizer.fit_transform(input_labels)

sess.run(optimizer, feed_dict={tf_train_dataset: input_data,
                               tf_train_labels: transformed_labels})
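
Note that this snippet builds every op in the single default graph, which is why no cross-graph error can occur here. When the model is built under an explicit with graph.as_default(): block, as in the question, a sketch of keeping the session consistent with that graph (assuming the same graph variable) looks like:

graph = tf.Graph()
with graph.as_default():
    # ... build placeholders, variables, loss and optimizer here ...
    init = tf.global_variables_initializer()

# Bind the session to the same graph so every op it runs belongs to it.
sess = tf.Session(graph=graph)
sess.run(init)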