Computing the log-likelihood

Time: 2018-08-08 03:08:19

Tags: machine-learning deep-learning log-likelihood

I am building a deep learning model in PyTorch, and I want to compute the log-likelihood of the training set at the point of minimum validation error. My current approach is to apply softmax to each example in the training set at the minimum validation error and then sum the maximum probabilities, but that does not look correct, so it would be great if someone could help me with an answer. My model is a 2-layer MLP that returns the output of its last layer passed through torch.nn.functional.log_softmax().

def train(model, train_loader, optimizer, epoch):
    total_loss = 0.0
    running_loss = 0.0
    likelihood = 0.0
    #scheduler.step()
    #model.train()

    for i, (data, target) in enumerate(train_loader):
        if cuda:
            data = data.cuda()
            target = target.cuda()

        # Zero the gradient buffers for all parameters
        optimizer.zero_grad()

        # Forward + backward + optimize
        out = model(data)            # log-probabilities (model ends in log_softmax)
        prob, pred = torch.max(out.data, 1)
        likelihood += prob.sum()     # sums the *maximum* log-probability per example
        #loss = F.cross_entropy(out, target)
        loss = F.nll_loss(out, target)
        loss.backward()

        optimizer.step()

        running_loss += loss.item()
        total_loss += loss.item()

        # Print the average running loss every 500 batches
        if i % 500 == 499:
            print("Epoch {}, train loss: {:.3f}".format(epoch + 1, running_loss / 500))
            running_loss = 0.0
    return likelihood, total_loss
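For comparison, here is a minimal sketch (with a small hypothetical 2-layer MLP standing in for the question's model) of how the log-likelihood of a batch is usually computed when the network already returns log_softmax outputs: it is the sum of the log-probabilities assigned to the *true* labels, which is just the negated `F.nll_loss` with `reduction='sum'`, rather than the sum of the per-example maxima.

```python
import torch
import torch.nn.functional as F

# Hypothetical 2-layer MLP mirroring the question's description:
# the last layer's output is passed through log_softmax.
class MLP(torch.nn.Module):
    def __init__(self, in_dim=4, hidden=8, n_classes=3):
        super().__init__()
        self.fc1 = torch.nn.Linear(in_dim, hidden)
        self.fc2 = torch.nn.Linear(hidden, n_classes)

    def forward(self, x):
        return F.log_softmax(self.fc2(torch.relu(self.fc1(x))), dim=1)

model = MLP()
data = torch.randn(5, 4)                 # dummy batch of 5 examples
target = torch.randint(0, 3, (5,))       # dummy integer class labels

with torch.no_grad():
    out = model(data)                    # log-probabilities, shape (5, 3)
    # Batch log-likelihood = sum over examples of log p(target_i).
    # nll_loss negates and (by default) averages the target
    # log-probabilities, so use reduction='sum' and negate back.
    log_likelihood = -F.nll_loss(out, target, reduction='sum')
    # Equivalent form: pick out each example's target log-probability.
    ll_check = out.gather(1, target.unsqueeze(1)).sum()
```

Accumulating `-F.nll_loss(out, target, reduction='sum').item()` per batch inside the training loop would therefore give the training-set log-likelihood directly, instead of summing `torch.max(out, 1)` values.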


0 Answers:

No answers yet