Derivative of ReLU

Time: 2018-09-23 09:46:43

Tags: pytorch

I am learning PyTorch. This is the first example from the official tutorial. Regarding the code shown below, I have two questions:

a) I know that the derivative of the ReLU function is 0 for x < 0 and 1 for x > 0. But the code seems to keep the x > 0 part of the gradient unchanged and set the x < 0 part to 0. Why is that?

b) Why is the transpose needed, i.e. x.t().mm(grad_h)? It doesn't seem to me that a transpose should be necessary. I'm confused. Thanks.

# -*- coding: utf-8 -*-

import torch


dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)
    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

1 Answer:

Answer 0 (score: 2)

1- Indeed, the derivative of the ReLU function is 0 when x < 0 and 1 when x > 0. But note that the gradient flows back from the output of the network all the way to h. When the backward pass reaches grad_h, it is computed as:

grad_h = derivative of ReLU(x) * incoming gradient

As you said, the derivative of the ReLU function is 1 where its input is positive, so in that region grad_h is simply the incoming gradient; where the input is negative the derivative is 0, so those entries of the incoming gradient are zeroed out. That is exactly what grad_h[h < 0] = 0 does.
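
A minimal sketch (with tiny hypothetical tensors, not the tutorial's variables) shows that zeroing the entries where h < 0, as the tutorial code does, is equivalent to multiplying the incoming gradient elementwise by the ReLU derivative:

import torch

# Hypothetical small example for illustration only
h = torch.tensor([[-1.0, 2.0], [3.0, -4.0]])           # pre-activations
grad_h_relu = torch.tensor([[0.5, 0.5], [0.5, 0.5]])   # incoming gradient

# Tutorial style: copy the incoming gradient and zero it where h < 0
grad_h = grad_h_relu.clone()
grad_h[h < 0] = 0

# Equivalent: multiply by the ReLU derivative (1 where h > 0, 0 where h < 0)
grad_h_alt = grad_h_relu * (h > 0).float()

print(torch.equal(grad_h, grad_h_alt))  # True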

2- The x matrix has size 64x1000, while the grad_h matrix is 64x100. Obviously you cannot multiply x by grad_h directly; you need to transpose x to get compatible dimensions: (1000x64) times (64x100) gives a 1000x100 result, which matches the shape of w1.
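
As a quick shape check (a sketch with dummy tensors of the tutorial's sizes, not the original data), only the transposed product yields a gradient whose shape matches w1:

import torch

x = torch.randn(64, 1000)      # N x D_in
grad_h = torch.randn(64, 100)  # N x H

grad_w1 = x.t().mm(grad_h)     # (1000 x 64) @ (64 x 100) -> (1000 x 100)
print(grad_w1.shape)           # torch.Size([1000, 100]), same shape as w1 (D_in x H)

# x.mm(grad_h) would raise an error: inner dimensions 1000 and 64 do not match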