PyTorch 0.4.0: broadcasting does not work in the optimizer

Asked: 2018-06-12 21:24:57

Tags: pytorch

I cannot seem to get broadcasting to work with autograd in PyTorch 0.4.0! Any help is appreciated. Below is a minimal code example that reproduces my problem. I want to find a single value "bias" that minimizes the loss over the dataset. I read the error message as saying that it wants to back-propagate a vector with 5 entries into a scalar and cannot figure out how to do that. But that is the whole idea of broadcasting: the behavior I expected is that the mean of the errors would be propagated back to the broadcast scalar value (here, the bias).

Please advise.
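
For reference, the behavior I expect is what autograd already does for built-in operations: a 0-dim parameter broadcast against a vector receives a 0-dim gradient, because autograd sums the incoming gradient over the broadcast dimensions. A minimal sketch of that (separate from my reproduction, variable names hypothetical):

import torch
from torch import nn

# A 0-dim bias broadcast against a 5-element vector gets a 0-dim gradient,
# because autograd reduces the incoming gradient over the broadcast dimensions.
b = nn.Parameter(torch.tensor(0.5))
x = torch.rand(5)
(x - b).sum().backward()
print(b.grad.shape)  # torch.Size([]) -- matches the shape of b

My full reproduction, which goes through a custom autograd.Function instead, follows: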

import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils.data import Dataset

print(torch.__version__)

class AddBias(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, bias):
        ctx.save_for_backward(input, bias)
        return input - bias
    @staticmethod
    def backward(ctx, grad_out):
        input, bias = ctx.saved_tensors
        grad_in = grad_bias = None
        len_grad = len(ctx.needs_input_grad)
        assert len_grad in {0, 1, 2}
        if ctx.needs_input_grad[0]: grad_in = grad_out
        if len_grad == 2: grad_bias = -1 * grad_out 
        return grad_in, grad_bias

class BiasModel(nn.Module):
    def __init__(self, size):
        super(BiasModel, self).__init__()
        self.bias_model = AddBias.apply
        self.bias = nn.Parameter(torch.tensor(0.5, dtype=torch.float, requires_grad=True))
    def forward(self, arr): return self.bias_model(arr[:], self.bias).unsqueeze(-1)

class MyData(Dataset):
    def __init__(self, data): self.data = data
    def __len__(self): return len(self.data)
    def __getitem__(self, i): 
        arr = torch.tensor(self.data[i], dtype=torch.float)
        target = torch.tensor(arr > 0.5, dtype=torch.float).unsqueeze(-1)
        return arr, target

m = 5
data = np.random.random((100, m))
model = BiasModel(m)
my_data = MyData(data)

loss_func = F.binary_cross_entropy_with_logits
with torch.no_grad():
    loss = 0.
    for arr, target in my_data: loss += loss_func(model(arr), target)
    print('loss before', loss / len(my_data))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss_tot = 0.
for arr, target in my_data:
    model.zero_grad()
    loss = loss_func(model(arr), target)
    loss_tot += loss
    loss.backward()
    optimizer.step()

Output:

0.4.0
loss before tensor(0.5735)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-27bce65b553b> in <module>()
     56     loss_tot += loss
     57     loss.backward()
---> 58     optimizer.step()

~/miniconda3/envs/myproject/lib/python3.6/site-packages/torch/optim/sgd.py in step(self, closure)
    105                         d_p = buf
    106 
--> 107                 p.data.add_(-group['lr'], d_p)
    108 
    109         return loss

RuntimeError: expand(torch.FloatTensor{[5]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)
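
What this error means: AddBias.backward returns grad_bias with the same shape as grad_out (5 entries here), so bias.grad ends up as a length-5 vector while bias itself is a 0-dim scalar, and SGD then fails when it tries to apply a [5]-shaped update to a []-shaped parameter. A short check (a sketch reusing model, my_data and loss_func from above) makes the mismatch visible:

# Sketch: one backward pass, then compare parameter and gradient shapes.
arr, target = my_data[0]
model.zero_grad()
loss_func(model(arr), target).backward()
print(model.bias.shape)       # torch.Size([])  -- 0-dim scalar parameter
print(model.bias.grad.shape)  # torch.Size([5]) -- un-reduced, broadcast-shaped gradient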

1 answer:

Answer 0 (score: 1):

I forgot to do the reverse broadcasting (reduce the gradient) in the backward pass!

Specifically, I had to change

if len_grad == 2: grad_bias = -1 * grad_out

to

if len_grad == 2: grad_bias = -1 * torch.mean(grad_out)
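
For completeness, here is a sketch of the whole corrected backward with that change folded in (the rest of AddBias is unchanged). Reducing with a sum instead of a mean would mirror what autograd does for built-in broadcasting, which sums the incoming gradient over broadcast dimensions; the mean used above also removes the shape mismatch and simply scales the bias gradient (by 1/5 in this example).

    @staticmethod
    def backward(ctx, grad_out):
        # grad_out arrives with the broadcast shape of the forward output (here [5]);
        # the gradient for the 0-dim bias must be reduced back to a scalar.
        input, bias = ctx.saved_tensors
        grad_in = grad_bias = None
        if ctx.needs_input_grad[0]:
            grad_in = grad_out                     # d(input - bias)/d(input) = 1
        if ctx.needs_input_grad[1]:
            grad_bias = -1 * torch.mean(grad_out)  # or grad_out.sum() to mirror native broadcasting
        return grad_in, grad_bias

With this change bias.grad is 0-dim again, so the expand error in optimizer.step() goes away.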