PyTorch runs slower on the GPU than on the CPU

Date: 2018-11-20 08:47:59

Tags: python pytorch

I'm new to Python and machine learning and have been trying to learn PyTorch. This is the tutorial I've been following: https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html . It's basically a very simple RNN that classifies names by their language of origin. When I run it on the CPU, the final result is

100000 100% (2m 25s) 0.3983 Tsujimoto / Japanese ✓
Training completed! Time taken: 145.90 seconds.

But when I run it on the GPU, to my frustration I get

100000 100% (5m 56s) 0.1462 Mcgregor / Scottish ✓
Training completed! Time taken: 356.96 seconds.

The only thing I changed is the train function: I send the input tensors to the device (device = torch.device("cuda:3" if torch.cuda.is_available() else "cpu")) and, of course, also move the rnn itself onto the GPU (rnn = RNN(n_letters, n_hidden, n_categories).to(device)):

import torch
import torch.nn.functional as F

def train(category_tensor, line_tensor):
    # rnn, device, and optimizer are module-level globals, as in the tutorial
    hidden = rnn.initHidden().to(device)
    category_tensor, line_tensor = category_tensor.to(device), line_tensor.to(device)
    rnn.zero_grad()
    # Feed the name one character at a time, threading the hidden state through
    for i in range(line_tensor.size(0)):
        output, hidden = rnn(line_tensor[i], hidden)
    loss = F.nll_loss(output, category_tensor)
    loss.backward()
    optimizer.step()
    return output, loss.item()
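
Since the numbers above are wall-clock times, here is a minimal sketch of how a single train call could be timed with explicit CUDA synchronization (this wrapper is illustrative and not the code I actually used; CUDA kernels launch asynchronously, so timings taken without torch.cuda.synchronize() can be misleading):

import time

def timed_train_step(category_tensor, line_tensor):
    # Finish any pending GPU work before starting the clock;
    # CUDA kernel launches are asynchronous by default.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    output, loss = train(category_tensor, line_tensor)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return output, loss, time.perf_counter() - start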

Am I doing something wrong?

Thanks!

0 Answers:

No answers yet