Linear regression gradient

Date: 2018-09-10 20:33:21

Tags: python machine-learning scipy regression linear-regression

I have a very basic linear regression example. The implementation below (without regularization):

import numpy as np


class Learning:

    def assume(self, weights, x):
        # Hypothesis: linear prediction x · wᵀ
        return np.dot(x, np.transpose(weights))

    def cost(self, weights, x, y, lam):
        predict = self.assume(weights, x) \
            .reshape(len(x), 1)

        # Sum of squared errors over all samples
        val = np.sum(np.square(predict - y), axis=0)
        assert val is not None

        assert val.shape == (1,)
        return val[0] / 2 * len(x)

    def grad(self, weights, x, y, lam):
        predict = self.assume(weights, x)\
            .reshape(len(x), 1)

        # Per-sample average of x scaled by the prediction error
        val = np.sum(np.multiply(
            x, (predict - y)), axis=0)
        assert val is not None

        assert val.shape == weights.shape
        return val / len(x)

I want to verify with scipy.optimize that the gradient is correct.

import scipy.optimize

learn = Learning()
INPUTS = np.array([[1, 2],
                   [1, 3],
                   [1, 6]])
OUTPUTS = np.array([[3], [5], [11]])
WEIGHTS = np.array([1, 1])

t_check_grad = scipy.optimize.check_grad(
    learn.cost, learn.grad, WEIGHTS, INPUTS, OUTPUTS, 0)
print(t_check_grad)
# Output will be 73.2241602235811!!!
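
For context, check_grad returns the 2-norm of the difference between the analytic gradient from grad and a finite-difference approximation built from cost, so a value around 73 means the two strongly disagree. A rough manual equivalent is sketched below with scipy.optimize.approx_fprime; the step size is an assumed choice (the usual square root of machine epsilon).

# Sketch: compare the analytic gradient with a finite-difference
# approximation of cost's gradient, as check_grad does internally.
eps = np.sqrt(np.finfo(float).eps)  # assumed step size
numeric = scipy.optimize.approx_fprime(
    WEIGHTS, learn.cost, eps, INPUTS, OUTPUTS, 0)
analytic = learn.grad(WEIGHTS, INPUTS, OUTPUTS, 0)
print(np.linalg.norm(analytic - numeric))  # large, matching check_grad above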

I checked all the calculations by hand from start to finish, and the implementation really does seem correct. But the output shows a huge difference! What is the reason?

1 answer:

Answer 0: (score: 2)

In your cost function you should return

val[0] / (2 * len(x))

instead of val[0] / 2 * len(x). Then you will get

print(t_check_grad)
# 1.20853633278e-07
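
The reason is operator precedence: in Python, / and * have equal precedence and are applied left to right, so val[0] / 2 * len(x) is evaluated as (val[0] / 2) * len(x), multiplying by the number of samples instead of dividing by 2 * len(x). A corrected cost method might look like the sketch below (the rest of the class unchanged).

    def cost(self, weights, x, y, lam):
        predict = self.assume(weights, x) \
            .reshape(len(x), 1)

        # Sum of squared errors over all samples
        val = np.sum(np.square(predict - y), axis=0)
        assert val.shape == (1,)
        # Parentheses force division by 2 * m: half the mean squared error
        return val[0] / (2 * len(x))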