ReLU performing worse than sigmoid?

Date: 2017-06-04 06:34:29

Tags: python machine-learning deep-learning

I am using sigmoid on all layers and the output and get a final error rate of 0.00012, but when I use the theoretically better ReLU I get the worst possible results. Can anyone explain why this happens? I am using the very simple 2-layer implementation found on a hundred sites, but I am still providing the code below:

import numpy as np
#test
#avg(nonlin(np.dot(nonlin(np.dot([0,0,1],syn0)),syn1)))
#returns list >> [predicted_output, confidence]
def nonlin(x,deriv=False):#Sigmoid
    if(deriv==True):
        return x*(1-x)

    return 1/(1+np.exp(-x))

def relu(x, deriv=False):#RELU
    if (deriv == True):
        for i in range(0, len(x)):
            for k in range(len(x[i])):
                if x[i][k] > 0:
                    x[i][k] = 1
                else:
                    x[i][k] = 0
        return x
    for i in range(0, len(x)):
        for k in range(0, len(x[i])):
            if x[i][k] > 0:
                pass  # do nothing since it would be effectively replacing x with x
            else:
                x[i][k] = 0
    return x

X = np.array([[0,0,1],
              [0,0,0],
              [0,1,1],
              [1,0,1],
              [1,0,0],
              [0,1,0]])

y = np.array([[0],[1],[0],[0],[1],[1]])

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

def avg(i):
        if i > 0.5:
            confidence = i
            return [1,float(confidence)]
        else:
            confidence=1.0-float(i)
            return [0,confidence]
for j in xrange(500000):

    # Feed forward through layers 0, 1, and 2
    l0 = X
    l1 = nonlin(np.dot(l0,syn0))
    l2 = nonlin(np.dot(l1,syn1))
    #print 'this is',l2,'\n'
    # how much did we miss the target value?
    l2_error = y - l2
    #print l2_error,'\n'
    if (j% 100000) == 0:
        print "Error:" + str(np.mean(np.abs(l2_error)))
        print syn1

    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error*nonlin(l2,deriv=True)

    # how much did each l1 value contribute to the l2 error (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)

    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * nonlin(l1,deriv=True)

    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
print "Final Error:" + str(np.mean(np.abs(l2_error)))
def p(l):
        return avg(nonlin(np.dot(nonlin(np.dot(l,syn0)),syn1)))

So p(x) is the prediction function after training, where x is a 1 x 3 matrix of input values.
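
For example, assuming the weights have already been trained by the loop above, a call like the following should return the predicted class and its confidence (the exact confidence value depends on the run):

print p([1,0,1])   # e.g. prints something like [0, 0.99...] for this training row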

1 Answer:

Answer 0 (score: 1)

Why do you say it is theoretically better? ReLU has proven to work better in most applications, but that does not mean it is universally better. Your example is very simple and the inputs are scaled to [0,1], the same as the outputs. That is exactly where I would expect sigmoid to perform well. In practice you rarely see sigmoids in hidden layers because of vanishing gradients and other problems with large networks, but that is hardly an issue for you here.

Also, if you use the ReLU derivative, you have missed an 'else' in your code: your derivative will simply get overwritten.
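
As an illustration only (relu_deriv is a hypothetical helper, not part of your code), a derivative that builds a fresh 0/1 mask instead of modifying its argument cannot overwrite anything:

def relu_deriv(x):
    # build a new 0/1 array rather than editing x in place
    out = np.zeros_like(x, dtype=float)
    out[x > 0] = 1.0
    return out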

As a refresher, here is the definition of ReLU:

  

f(x) = max(0, x)

...which means it can blow your activations up towards infinity. You want to avoid using ReLU on the last (output) layer.
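
For example, a minimal sketch of that mixed setup, reusing the relu and nonlin functions from the question, with ReLU on the hidden layer only and sigmoid on the output so l2 stays in (0, 1):

l1 = relu(np.dot(l0, syn0))     # hidden layer: ReLU
l2 = nonlin(np.dot(l1, syn1))   # output layer: sigmoid keeps predictions bounded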

On a side note, you should take advantage of vectorized operations whenever possible:

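For instance, one possible vectorized version of the relu function (a sketch, not necessarily the fastest formulation):

def relu(x, deriv=False):
    # element-wise NumPy operations replace the nested Python loops
    if deriv:
        return (x > 0).astype(float)
    return np.maximum(x, 0)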

Yes, and it will be a lot faster than the if/for loops you are doing.
