TensorFlow neural network for binary classification; how do I use placeholders?

Time: 2017-05-23 07:29:52

Tags: machine-learning tensorflow neural-network

My code is shown below.

My target is a vector of shape (N,) that contains only binary values.

However, I am getting the following error:

/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 /Users/Lai/Dropbox/PersonalProject/MachineLearningForSports/models/NeuralNetwork.py
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
  File "/Users/Lai/Dropbox/PersonalProject/MachineLearningForSports/models/NeuralNetwork.py", line 102, in <module>
    _, c = sess.run([optimizer,cost],feed_dict = {x:batch_x,y:batch_y})
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 943, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (100,) for Tensor 'Placeholder_1:0', which has shape '(?, 2)'

My batch size is 100, so I believe the error occurs when my target is compared with my prediction; as far as I can tell, the tf.placeholder seems to expect the prediction in an N * 2 shape. Any help? Thanks.
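For context, the mismatch the traceback reports can be reproduced in isolation. Below is a minimal sketch (not part of the original post), assuming the labels are a flat 0/1 vector as described above:

import numpy as np
import tensorflow as tf

y = tf.placeholder('float', [None, 2])   # expects one row of 2 values per example
batch_y = np.zeros(100)                  # shape (100,): one flat label per example

with tf.Session() as sess:
    # Raises: ValueError: Cannot feed value of shape (100,) for Tensor ...,
    # which has shape '(?, 2)'
    sess.run(tf.identity(y), feed_dict={y: batch_y})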

import tensorflow as tf
import DataPrepare as dp
import numpy as np


def random_init(x,num_feature_1st,num_feature_2nd,num_class):
    # Note: the x argument is never used; the shapes come from the size arguments alone.
    W1 =  tf.Variable(tf.random_normal([num_feature_1st,num_feature_2nd]))
    bias1 = tf.Variable(tf.random_normal([num_feature_2nd]))
    W2 = tf.Variable(tf.random_normal([num_feature_2nd,num_class]))
    bias2 = tf.Variable(tf.random_normal([num_class]))

    return [W1,bias1,W2,bias2]




def softsign(z):
    """The softsign function, applied elementwise."""
    # tf.abs keeps the operation inside the TensorFlow graph
    # (np.abs was being applied to a symbolic Tensor here).
    # TensorFlow also ships this activation as tf.nn.softsign.
    return z / (1. + tf.abs(z))




def multilayer_perceptron(x,num_feature_1st,num_feature_2nd,num_class):
    params = random_init(x,num_feature_1st,num_feature_2nd,num_class)
    layer_1 = tf.add(tf.matmul(x,params[0]),params[1])
    layer_1 = softsign(layer_1)
    #layer_1 = tf.nn.relu(layer_1)
    layer_2 = tf.add(tf.matmul(layer_1,params[2]),params[3])
    #output = tf.nn.softmax(layer_2)
    output = tf.nn.sigmoid(layer_2)

    return output





def next_batch(num, dataX,dataY):
    idx = np.arange(0,len(dataX))
    np.random.shuffle(idx)
    idx = idx[0:num]
    dataX_shuffle = [dataX[i] for i in idx]
    dataY_shuffle = [dataY[i] for i in idx]
    dataX_shuffle = np.asarray(dataX_shuffle)
    dataY_shuffle = np.asarray(dataY_shuffle)
    return dataX_shuffle, dataY_shuffle




if __name__ == "__main__":
    #sess = tf.InteractiveSession()

    learning_rate = 0.001
    training_epochs = 10
    batch_size = 100
    display_step = 1
    num_feature_1st = 6
    num_feature_2nd = 500
    num_class = 2

    x = tf.placeholder('float', [None, 6])   # input features: 6 per example
    y = tf.placeholder('float', [None, 2])   # labels: expects one-hot rows of shape (N, 2)


    data = dp.dataPrepare(dp.datas,dp.path)
    trainX = data[0]
    testX = data[1]   # a matrix
    trainY = data[2]  # a vector of binary labels, shape (N,)
    testY = data[3]
    # Note: this call is redundant; multilayer_perceptron() below builds its own parameters.
    params = random_init(x,num_feature_1st,num_feature_2nd,num_class)

    # construct model
    pred = multilayer_perceptron(x, num_feature_1st, num_feature_2nd, num_class)

    # Note: pred already has a sigmoid applied, yet this op expects raw logits, so the
    # sigmoid is effectively applied twice. In TensorFlow >= 1.0 this call also requires
    # keyword arguments (labels=..., logits=...).
    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)





    init = tf.global_variables_initializer()



    with tf.Session() as sess:
        sess.run(init)




        #train
        for epoch in range(training_epochs):
            avg_cost = 0
            total_batch = int(len(trainX[:,0])/batch_size)

            for i in range(total_batch):
                batch_x, batch_y = next_batch(batch_size,trainX,trainY)

                # This is the line from the traceback: batch_y has shape (100,),
                # but the y placeholder is declared with shape (?, 2).
                _, c = sess.run([optimizer,cost],feed_dict = {x:batch_x,y:batch_y})

                avg_cost += c/total_batch

            if epoch % display_step ==0:
                print("Epoch: ", "%04d" % (epoch+1), " cost= ", "{:.9f}".format(avg_cost))
        print("Optimization Finished!")

1 Answer:

Answer 0 (score: 0)

Whenever you execute a dynamic node from the computation graph (which is almost any node that is not an input), you need to specify all the variables it depends on. Think of it this way: if you have a mathematical function of the form y = f(x) = Ax + b (for example) and you want to evaluate that function, you also need to specify x. However, if you want to evaluate (i.e. read) the value of A, you do not need to specify x, because A is already known (at least in this context).
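A minimal sketch of this distinction in TensorFlow 1.x (the names A, b, x and y mirror the formula above and are illustrative, not taken from the question's code):

import tensorflow as tf

A = tf.Variable(tf.random_normal([6, 2]))   # a parameter: known once initialized
b = tf.Variable(tf.random_normal([2]))
x = tf.placeholder(tf.float32, [None, 6])   # an input: unknown until fed
y = tf.matmul(x, A) + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(A))                      # works: A does not depend on x
    # print(sess.run(y))                    # fails: y depends on the placeholder x
    print(sess.run(y, feed_dict={x: [[0.0] * 6]}))  # works: x is specified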

Consequently, you can evaluate the network's parameters (by passing them to tf.Session.run(...)) without specifying the inputs (A in the example above). However, you cannot evaluate the output of the function without specifying the input (in the example, you would need to specify x).

As for your code, the failing line will not work because you are asking the session to evaluate a function without correctly specifying its input.
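In this case the input for the y placeholder is mis-specified: it is fed a flat (100,) vector where a (100, 2) one-hot matrix is expected. A minimal sketch of a fix, assuming the labels are 0/1 integers (the one-hot conversion is my suggestion, not part of the original answer):

import numpy as np

# Convert the flat (100,) label vector into (100, 2) one-hot rows, e.g. 1 -> [0., 1.]
batch_y_onehot = np.eye(2)[np.asarray(batch_y, dtype=np.int64)]
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y_onehot})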
