Placeholder missing error in TensorFlow CNN

Asked: 2016-04-26 00:21:43

Tags: python tensorflow deep-learning torch mnist

I'm running a convolutional neural network on the MNIST database using TensorFlow, but I'm getting the following error.

tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'x' with dtype float
     [[Node: x = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

x = tf.placeholder(tf.float32, [None, 784], name='x')  # MNIST data image of shape 28*28 = 784

I thought I was updating the value of x correctly via feed_dict, but it says I haven't fed a value for the placeholder x.

Also, are there any other logical flaws in my code?

Any help would be greatly appreciated. Thanks.

import tensorflow as tf
import numpy
from tensorflow.examples.tutorials.mnist import input_data

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 100
display_step = 1

# tf Graph Input
#x = tf.placeholder(tf.float32, [50, 784], name='x') # mnist data image of shape 28*28=784
#y = tf.placeholder(tf.float32, [50, 10], name='y') # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])


W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])


W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

# Initializing the variables
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)


    # Training cycle
    for i in range(1000):
        print i
        batch_xs, batch_ys = mnist.train.next_batch(50)

        x_image = tf.reshape(x, [-1,28,28,1])

        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
        h_pool1 = max_pool_2x2(h_conv1)

        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)

        h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)


        y_conv=tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

        cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_conv), reduction_indices=[1]))
        sess.run(
          [cross_entropy, y_conv],
          feed_dict={x: batch_xs, y: batch_ys})

        correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y,1))
        print correct_prediction.eval()
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

3 Answers:

Answer 0 (score: 2)

Why are you trying to create placeholder variables? You should be able to directly use the outputs generated by mnist.train.next_batch(50), provided that you move the computation of correct_prediction and accuracy inside the model.
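One way to read this suggestion, as an abbreviated sketch (the conv/pool/fc layers from the question are elided): build every op, including correct_prediction and accuracy, before the session starts, so the loop only feeds data.

# Sketch only: '...' stands for the question's layers, which produce y_conv
x = tf.placeholder(tf.float32, [None, 784], name='x')
y = tf.placeholder(tf.float32, [None, 10], name='y')
# ... conv/pool/fc layers from the question, yielding y_conv ...
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(50)
        # The graph is fixed; only the batch data changes per iteration
        print(sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys}))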

Answer 1 (score: 2)

You're getting that error because you're attempting to run eval() on correct_prediction. That tensor requires the batch inputs (x and y) in order to be evaluated. You can correct the error by changing the call to:

correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys})

But as Benoit Steiner mentioned, you could just as easily pull it into the model.

More generally, you're not doing any kind of optimization here, but maybe you just haven't gotten around to that yet. As it stands, the script just prints out bad predictions for a while. :)
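As a sketch of the "pull it into the model" idea (assuming the question's x and y placeholders are uncommented), the accuracy op can be defined once with the rest of the graph and evaluated in the same run that feeds the batch:

correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Inside the loop: one run computes the loss and the batch accuracy together
loss_val, acc_val = sess.run([cross_entropy, accuracy],
                             feed_dict={x: batch_xs, y: batch_ys})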

Answer 2 (score: 0)

First off, your x and y are commented out; if that's the case in your actual code, it is most likely the issue.

correct_prediction.eval() is equivalent to tf.Session.run(correct_prediction) (or, in your case, sess.run()), and thus requires the same syntax. So in order to run, it would need to be correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys}). Be warned, however, that this is generally RAM-intensive and may cause your system to hang. Because of the RAM usage, pulling the accuracy function into the model may be a good idea.

I didn't see an optimization function making use of your cross entropy, but then I've never tried not using one, so if it works, don't fix it. If it ends up throwing an error, though, you may want to try:

optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)

and replace 'cross_entropy' in

sess.run([cross_entropy, y_conv], feed_dict={x: batch_xs, y: batch_ys})

with 'optimizer'.

https://pythonprogramming.net/tensorflow-neural-network-session-machine-learning-tutorial/

Check the accuracy evaluation section of that script.
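Putting the three answers together, here is a hedged sketch of a corrected version of the question's script (TF 0.x-era API to match the original; AdamOptimizer's default learning rate and the logging interval are assumptions):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Placeholders restored: they were commented out in the question
x = tf.placeholder(tf.float32, [None, 784], name='x')
y = tf.placeholder(tf.float32, [None, 10], name='y')

# Build the whole graph once, before the session starts
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, weight_variable([5, 5, 1, 32])) + bias_variable([32]))
h_pool1 = max_pool_2x2(h_conv1)
h_conv2 = tf.nn.relu(conv2d(h_pool1, weight_variable([5, 5, 32, 64])) + bias_variable([64]))
h_pool2 = max_pool_2x2(h_conv2)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, weight_variable([7 * 7 * 64, 1024])) + bias_variable([1024]))
y_conv = tf.nn.softmax(tf.matmul(h_fc1, weight_variable([1024, 10])) + bias_variable([10]))

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)        # optimizer from answer 2
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # evaluation pulled into the model

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(50)
        # One run trains and reports batch accuracy, feeding both placeholders
        _, acc = sess.run([train_step, accuracy], feed_dict={x: batch_xs, y: batch_ys})
        if i % 100 == 0:
            print("step %d, batch accuracy %g" % (i, acc))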