Theano neural network for function approximation

Date: 2018-03-25 14:19:17

Tags: python neural-network theano

I'm new to Theano, and as a mathematician rather than an experienced programmer I'm a bit lost. I'm building a neural network for function approximation (for example, sin(x)). The thing is, my program doesn't work. The problem is somewhere in the cost function: not in the math, but in the dimensions. I think something is wrong with the matrix shapes. I take 30 sample points of x and the same number of function values (values of sin(x)).

I would really appreciate any help :)
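For concreteness, training data of the kind described above could be generated like this (a hypothetical sketch; the names `Initial_points` and `Function_values` are taken from the call in the traceback below, and the interval [0, 2π] is an assumption):

```python
import numpy

# Hypothetical data setup: 30 sample points of x on [0, 2*pi]
# and the corresponding values of sin(x).
Initial_points = numpy.linspace(0.0, 2.0 * numpy.pi, 30)
Function_values = numpy.sin(Initial_points)

print(Initial_points.shape, Function_values.shape)  # (30,) (30,)
```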

    import numpy
    import theano
    import theano.tensor

    floatX = theano.config.floatX

    class Neural_Net(object):
        def __init__(self, TrainData, TargetVector, TestData, LearningRate):
            # network parameters
            random_seed = 42
            hidden_layer_size = 5

            PreparedTrainData = numpy.ones((len(TrainData), 2))  # second column is for bias weights
            PreparedTrainData[:, 0] = TrainData

            n_features = 2                                       # first feature is the variable, second is the bias

            # random number generator
            rng = numpy.random.RandomState(random_seed)

            # symbolic variables for the network
            input_vector = theano.tensor.fmatrix(name='input_vector')  # strange name, but it is a matrix :)
            target_value = theano.tensor.fvector(name='target_value')

            input_vector_data = numpy.asarray(PreparedTrainData)
            target_value_data = numpy.asarray(TargetVector)
            #input_vector_test_data = numpy.asarray(TestData)

            # input->hidden weights
            W_hidden_vals = numpy.asarray(
                rng.normal(loc=0.0, scale=0.1, size=(n_features, hidden_layer_size)),
                dtype=floatX)
            W_hidden = theano.shared(W_hidden_vals, 'W_hidden')

            # calculating the hidden layer
            hidden = theano.tensor.dot(input_vector, W_hidden)
            hidden = theano.tensor.nnet.sigmoid(hidden)

            # hidden->output weights
            W_output_vals = numpy.asarray(
                rng.normal(loc=0.0, scale=0.1, size=(hidden_layer_size, 1)),
                dtype=floatX)
            W_output = theano.shared(W_output_vals, 'W_output')

            # calculating the predicted value (output)
            predicted_value = theano.tensor.dot(hidden, W_output)
            predicted_value = theano.tensor.nnet.sigmoid(predicted_value)

            # calculating the cost function
            cost = theano.tensor.sum((predicted_value - target_value)**2)
            #cost += l2_regularisation * (theano.tensor.sqr(W_hidden).sum() + theano.tensor.sqr(W_output).sum())

            # gradient descent updates based on the cost function
            updates = [(W_hidden, W_hidden - LearningRate * theano.tensor.grad(cost, W_hidden)),
                       (W_output, W_output - LearningRate * theano.tensor.grad(cost, W_output))]

            # Theano functions for training and testing the network
            self.train = theano.function(inputs=[input_vector, target_value],
                                         outputs=[cost], updates=updates,
                                         allow_input_downcast=True)
            self.test = theano.function(inputs=[input_vector],
                                        outputs=[predicted_value],
                                        allow_input_downcast=True)

            for i in range(50):
                self.train(input_vector_data, target_value_data)

            #print(self.test())

This is the error I get.

self.fn() if output_subset is None else\

ValueError: Input dimension mis-match. (input[0].shape[1] = 1, input[1].shape[1] = 20)
Apply node that caused the error: Elemwise{sub,no_inplace}(Elemwise{ScalarSigmoid}[(0, 0)].0, InplaceDimShuffle{x,0}.0)
Toposort index: 8
Inputs types: [TensorType(float64, matrix), TensorType(float32, row)]
Inputs shapes: [(20, 1), (1, 20)]
Inputs strides: [(8, 8), (80, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{Sqr}[(0, 0)](Elemwise{sub,no_inplace}.0), Elemwise{Composite{(i0 * i1 * i2 * (i3 - i2))}}[(0, 2)](TensorConstant{(1, 1) of 2.0}, Elemwise{sub,no_inplace}.0, Elemwise{ScalarSigmoid}[(0, 0)].0, TensorConstant{(1, 1) of 1.0})]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
  File "/Users/mark/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/Users/mark/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2828, in run_ast_nodes
    if self.run_code(code, result):
  File "/Users/mark/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2882, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-152-177242de509b>", line 1, in <module>
    runfile('/Users/mark/Documents/Python_Projects/Neural_Networks/toy_example.py', wdir='/Users/mark/Documents/Python_Projects/Neural_Networks')
  File "/Users/mark/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "/Users/mark/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)
  File "/Users/mark/Documents/Python_Projects/Neural_Networks/toy_example.py", line 22, in <module>
    Classifier = Cls.Neural_Net(Initial_points,Function_values,Predict_values,learningrate)
  File "Classes_toy_ex.py", line 62, in __init__
    cost = theano.tensor.sqr(predicted_value - target_value).sum

HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
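The shapes reported in the error, (20, 1) versus (1, 20), can be reproduced in plain NumPy, whose broadcasting rules Theano follows. The output of `dot(hidden, W_output)` is a column, while an `fvector` target is broadcast as a row, so the subtraction in the cost pairs every prediction with every target instead of computing elementwise residuals. A minimal sketch of the mismatch (the array contents are placeholders):

```python
import numpy

# Column vector, like the (N, 1) output of dot(hidden, W_output),
# minus a length-N vector broadcast as a (1, N) row.
predicted = numpy.ones((20, 1))
target = numpy.arange(20, dtype='float32')

diff = predicted - target
print(diff.shape)  # (20, 20): every prediction paired with every target

# Flattening the column restores elementwise subtraction.
diff_fixed = predicted.flatten() - target
print(diff_fixed.shape)  # (20,)
```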

0 Answers:

No answers yet.