Calling a function from another class

Date: 2016-07-21 23:51:37

Tags: python class numpy

I am trying to get a better understanding of classes in Python and OOP. The code I am working with comes from: {{3}}

I am playing with it to teach myself how feedforward neural networks and classes work, deconstructing everything and observing how it behaves. One option I want to add is the ability to choose the type of activation function for the neurons. I have built two classes (one for the sigmoid and one for the tanh), each of which contains an activation_fn and a prime method, like so:

#### Libraries ####
# Third Party Libraries
import numpy as np

class Sigmoid(object):
    @staticmethod
    def activation_fn(z):
        """The sigmoid function.
        """
        return 1.0/(1.0+np.exp(-z))

    def prime(self, z):
        """The derivative of the sigmoid function.
        """
        return self.activation_fn(z)*(1-self.activation_fn(z))

class Tanh(object):
    @staticmethod
    def activation_fn(z):
        """The tanh function.
        """
        return np.tanh(z)

    def prime(self, z):
        """The derivative of the tahn function.
        """
        return (1-self.activation_fn(z)**2)

When I test the two classes, they both output the desired results:

x = Sigmoid()
print(x.activation_fn(1)) # outputs: 0.73105857863
print(x.prime(1))         # outputs: 0.196611933241

y = Tanh()
print(y.activation_fn(1)) # outputs: 0.761594155956
print(y.prime(1))         # outputs: 0.419974341614

When I try to use them with the following code:

#### Libraries ####
# Third Party Libraries
import numpy as np

class Sigmoid(object):
    def activation_fn(self, z):
        """The sigmoid function."""
        return 1.0/(1.0+np.exp(-z))

    def prime(self, z):
        """The derivative of the sigmoid function."""
        return self.activation_fn(z)*(1-self.activation_fn(z))

class Tanh(object):
    def activation_fn(self, z):
        """The tanh function."""
        return np.tanh(z)

    def prime(self, z):
        """The derivative of the tahn function."""
        return (1-self.activation_fn(z)**2)

class Network(object):
    """Builds the initial network and sets the weights and biases
    randomly by every connection and neuron, respectively. One can
    choose between the sigmoid and tanh as an activation.
    """
    def __init__(self, sizes, neurons=Sigmoid):
        self.layer_num = len(sizes)
        self.sizes = sizes
        self.biases = [np.zeros((y, 1)) for y in self.sizes[1:]]
        self.weights = [np.zeros((y, x))
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]
        self.neurons = neurons

    def feedforward(self, a):
        for b, w in zip(self.biases, self.weights):
            a = self.neurons.activation_fn(np.dot(w, a)+b)

        return a

if __name__ == '__main__':
    # Network with 3 nodes in the first layer, 4 in the second, and 2 in
    # the final layer
    test = Network([3, 4, 2])

    print(test.feedforward(1))

it also outputs a result; since the weights and biases are initialized to zero, every activation is sigmoid(0) = 0.5:

[[ 0.5  0.5  0.5]
 [ 0.5  0.5  0.5]]

The problem I run into is when I try to call prime in a way analogous to def feedforward(self, a), where I simply change self.neurons.activation_fn(np.dot(w, a)+b) to self.neurons.prime(np.dot(w, a)+b):

def feedforward(self,a):
    ...

def prime_test(self, a):
    for b, w in zip(self.biases, self.weights):
        a = self.neurons.prime(np.dot(w, a)+b)

    return a

but when I execute the following code:

if __name__ == '__main__':
    # Network with 3 nodes in the first layer, 4 in the second, and 2 in
    # the final layer
    test = Network([3, 4, 2])

    print(test.prime_test(1))

I get the following error, which I do not get when I test the two classes directly as above:

prime_test
    a = self.neurons.prime(np.dot(w, a)+b)
TypeError: prime() missing 1 required positional argument: 'z'

I am not sure how to remedy this, especially because the call looks almost identical to x.prime(1).
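
For reference, here is a minimal standalone sketch (assuming the @staticmethod version of Sigmoid from the first listing; the names below are only for illustration) that reproduces the two call patterns side by side: calling through the class object that Network.__init__ stores in self.neurons, versus calling on an instance such as x:

import numpy as np

class Sigmoid(object):
    @staticmethod
    def activation_fn(z):
        """The sigmoid function."""
        return 1.0/(1.0+np.exp(-z))

    def prime(self, z):
        """The derivative of the sigmoid function."""
        return self.activation_fn(z)*(1-self.activation_fn(z))

neurons = Sigmoid               # the class itself, as stored by Network.__init__
print(neurons.activation_fn(1))  # works: a staticmethod needs no instance
# print(neurons.prime(1))        # TypeError: prime() missing 1 required
                                 # positional argument: 'z' (here 1 is bound to
                                 # `self`, so `z` is never supplied)

x = Sigmoid()                   # an instance, as in the standalone test above
print(x.prime(1))               # works: `self` is bound automatically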

0 Answers
