Logits and labels must be the same size

Time: 2017-10-22 06:52:00

Tags: python machine-learning tensorflow neural-network

I am trying to create a neural network that takes 13 features as input from multiple CSV files, one at a time, and measures the accuracy after each iteration. Here is my code snippet:

import tensorflow as tf
import numpy as np
from tensorflow.contrib.layers import fully_connected
import os
import pandas as pd 
n_inputs = 13
n_hidden1 = 30
n_hidden2 = 10
n_outputs = 2
learning_rate = 0.01
n_epochs = 40
batch_size = 1

patient_id = os.listdir('./subset_numerical')
output = pd.read_csv('output.csv')
sepsis_pat = output['output'].tolist()

X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X")
y = tf.placeholder(tf.int64, shape=[None], name="y")

def data_processor(n):
    id = pd.read_csv('./subset_numerical/'+patient_id[n])
    id_input = np.array([id['VALUE'].tolist()])
    for s in sepsis_pat:
        if str(s) == str(patient_id[n].split('.')[0]):
            a = 1
    try:
        if a == 1:
            a = 0
            return [id_input, np.array([1])]
    except:
        return [id_input, np.array([0])]

def test_set():
    id_combined = []
    out = []
    for p in range(300, len(patient_id)):
        try:
            id1 = pd.read_csv('./subset_numerical/' + patient_id[p])
            id_input1 = np.array(id1['VALUE'].tolist())
            id_combined.append(id_input1)
            for s in sepsis_pat:
                if str(s) == str(patient_id[p].split('.')[0]):
                    a = 1
            try:
                if a == 1:
                    a = 0
                    out.append([1, 0])
            except:
                out.append([0, 1])
        except:
            pass
    return [np.array(id_combined), np.array(out)]

# Declaration of hidden layers and calculation of loss goes here 
# Construction phase begins
with tf.name_scope("dnn"):
    hidden1 = fully_connected(X, n_hidden1, scope="hidden1")
    hidden2 = fully_connected(hidden1, n_hidden2, scope="hidden2")
    logits = fully_connected(hidden2, n_outputs, scope="outputs", activation_fn=None) # We will apply softmax here later

# Calculating loss
with tf.name_scope("loss"):
    xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

# Training with gradient descent optimizer
with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)
# Measuring accuracy
with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    accuracy_summary = tf.summary.scalar('accuracy', accuracy)

# Variable initialization and saving model goes here 
# Construction is finished. Let's get this to work.
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        a = 0
        for iteration in range(300 // batch_size):
            X_batch, y_batch = data_processor(iteration)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
            acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
            X_test, y_test = test_set()
            acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
            print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
        save_path = saver.save(sess, "./my_model_final.ckpt")

But I am stuck on this error:

logits and labels must be same size: logits_size=[1,2] labels_size=[1,1]

The error seems to occur on this line:

correct = tf.nn.in_top_k(logits, y, 1)

What am I doing wrong?

2 answers:

Answer 0 (score: 0)

Based on the error log you provided, the problem is in this line of your code:

xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits)

Make sure that the logits and labels have the same shape and dtype. The shape should be of the format [batch_size, num_classes] and the dtype should be one of float16, float32, or float64. See the documentation of softmax_cross_entropy_with_logits for more details.
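To see why the shapes must match, here is a minimal NumPy sketch (not TensorFlow itself) of what softmax_cross_entropy_with_logits computes per example. With logits of shape [1, 2] and one-hot labels of shape [1, 2] it works; with labels of shape [1, 1], as in the error message, the shapes cannot be multiplied element-wise:

```python
import numpy as np

def softmax_xent(logits, labels):
    # Per-example softmax cross-entropy; labels must match logits' shape.
    assert logits.shape == labels.shape, "logits and labels must be same size"
    shifted = logits - logits.max(axis=1, keepdims=True)  # for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0]])       # shape [1, 2], as in the error message
one_hot = np.array([[1.0, 0.0]])      # shape [1, 2]: matches, so this works
loss = softmax_xent(logits, one_hot)
print(loss)                           # one loss value per example

sparse = np.array([[0]])              # shape [1, 1]: mismatch, so this fails
try:
    softmax_xent(logits, sparse)
except AssertionError as e:
    print(e)  # logits and labels must be same size
```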

Answer 1 (score: 0)

Since you have set n_outputs = 2, the shape of logits is [?, 2] (? means the batch size), while the shape of y is just [?]. In order to apply the softmax loss function, the last FC layer should return a flat tensor that can be compared with y.

Solution: set n_outputs = 1.
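Alternatively, if you want to keep n_outputs = 2, you can make the labels match the logits by one-hot encoding them into shape [batch_size, 2] (TensorFlow also provides tf.nn.sparse_softmax_cross_entropy_with_logits, which accepts integer class labels of shape [batch_size] directly). A minimal NumPy sketch of the one-hot conversion, with illustrative label values:

```python
import numpy as np

n_outputs = 2
y_sparse = np.array([1, 0, 1])          # integer class ids, shape [3]
y_onehot = np.eye(n_outputs)[y_sparse]  # shape [3, 2], matches logits [3, 2]
print(y_onehot)
# [[0. 1.]
#  [1. 0.]
#  [0. 1.]]
```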
