PyTorch model cannot forward a DataLoader dataset: NotImplementedError

Time: 2019-05-01 07:11:57

Tags: python machine-learning neural-network computer-vision pytorch

So I am trying to train a CNN using the PyTorch library. There are no issues with the model itself (I can forward data through it without errors), and I prepared a custom dataset using the DataLoader function.

Here is my data preparation code (I have omitted some unrelated variable declarations, etc.):

# Initialize model
class neural_net_model(nn.Module):
      # omitted 
      ...

# Prep the dataset
train_data = torchvision.datasets.ImageFolder(root = TRAIN_DATA_PATH, transform = TRANSFORM_IMG)
train_data_loader = data_utils.DataLoader(train_data, batch_size = BATCH_SIZE, shuffle = True)

test_data = torchvision.datasets.ImageFolder(root = TEST_DATA_PATH, transform = TRANSFORM_IMG)
test_data_loader = data_utils.DataLoader(test_data, batch_size = BATCH_SIZE, shuffle = True)

However, in the training code (which I put together following various online references), an error occurs when feeding the model forward with the following statement:

...

for step, (data, label) in enumerate(train_data_loader):
    outputs = neural_net_model(data)
    ...

which raises the error:

NotImplementedError                       Traceback (most recent call last)
<ipython-input-12-690cfa6916ec> in <module>
      6 
      7         # Forward pass
----> 8         outputs = neural_net_model(images)
      9         loss = criterion(outputs, labels)
     10 

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in forward(self, *input)
     83             registered hooks while the latter silently ignores them.
     84         """
---> 85         raise NotImplementedError
     86 
     87     def register_buffer(self, name, tensor):

NotImplementedError: 

I could not find a similar problem anywhere on the internet, which seems strange because I followed the reference code exactly, and the error is not well defined in the documentation (it is just NotImplementedError: with nothing after it).

Do you know the cause of and a solution to this problem?

  • Here is the code of the network:
from torch import nn, from_numpy
import torch
import torch.nn.functional as F 

class DeXpression(nn.Module):
    def __init__(self, ):
        super(DeXpression, self).__init__()

        # Layer 1
        self.convolution1 = nn.Conv2d(in_channels = 1, out_channels = 64, kernel_size = 7, stride = 2, padding = 3)
        self.pooling1 = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)

        # Layer FeatEx1
        self.convolution2a = nn.Conv2d(in_channels = 64, out_channels = 96, kernel_size = 1, stride = 1, padding = 0)
        self.convolution2b = nn.Conv2d(in_channels = 96, out_channels = 208, kernel_size = 3, stride = 1, padding = 1)

        self.pooling2a = nn.MaxPool2d(kernel_size = 3, stride = 1, padding = 1)
        self.convolution2c = nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = 1, stride = 1, padding = 0)

        self.pooling2b = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)

        # Layer FeatEx2
        self.convolution3a = nn.Conv2d(in_channels = 272, out_channels = 96, kernel_size = 1, stride = 1, padding = 0)
        self.convolution3b = nn.Conv2d(in_channels = 96, out_channels = 208, kernel_size = 3, stride = 1, padding = 1)

        self.pooling3a = nn.MaxPool2d(kernel_size = 3, stride = 1, padding = 1)
        self.convolution3c = nn.Conv2d(in_channels = 272, out_channels = 64, kernel_size = 1, stride = 1, padding = 0)

        self.pooling3b = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)

        # Fully-connected Layer
        self.fc1 = nn.Linear(45968, 1024)
        self.fc2 = nn.Linear(1024, 64)
        self.fc3 = nn.Linear(64, 8)

    def net_forward(self, x):
        # Layer 1
        x = F.relu(self.convolution1(x))
        x = F.local_response_norm(self.pooling1(x), size = 2)
        y1 = x
        y2 = x
        # Layer FeatEx1
        y1 = F.relu(self.convolution2a(y1))
        y1 = F.relu(self.convolution2b(y1))

        y2 = self.pooling2a(y2)
        y2 = F.relu(self.convolution2c(y2))

        x = torch.zeros([y1.shape[0], y1.shape[1] + y2.shape[1], y1.shape[2], y1.shape[3]])
        x[:, 0:y1.shape[1], :, :] = y1
        x[:,  y1.shape[1]:, :, :] = y2

        x = self.pooling2b(x)
        y1 = x
        y2 = x
        # Layer FeatEx2
        y1 = F.relu(self.convolution3a(y1))
        y1 = F.relu(self.convolution3b(y1))

        y2 = self.pooling3a(y2)
        y2 = F.relu(self.convolution3c(y2))

        x = torch.zeros([y1.shape[0], y1.shape[1] + y2.shape[1], y1.shape[2], y1.shape[3]])
        x[:, 0:y1.shape[1], :, :] = y1
        x[:,  y1.shape[1]:, :, :] = y2

        x = self.pooling3b(x)
        # Fully-connected layer
        x = x.view(-1, x.shape[0] * x.shape[1] * x.shape[2] * x.shape[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.log_softmax(self.fc3(x), dim = None)

        return x 

1 Answer:

Answer 0 (score: 0):

Your network class implements a net_forward method. However, nn.Module expects its derived classes to implement a forward method (without the net_ prefix).
Simply rename net_forward to forward and your code will work.
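
As a minimal sketch of that fix (trimmed to a single layer for brevity; only the method name matters here):

class DeXpression(nn.Module):
    def __init__(self):
        super(DeXpression, self).__init__()
        # ... same layer definitions as in the posted code ...
        self.convolution1 = nn.Conv2d(in_channels = 1, out_channels = 64, kernel_size = 7, stride = 2, padding = 3)

    def forward(self, x):  # renamed from net_forward
        # ... same forward pass as in the posted code ...
        x = F.relu(self.convolution1(x))
        return x

Calling the module as model(data) then goes through nn.Module.__call__, which dispatches to your forward (and runs any registered hooks), so keep calling the instance rather than forward directly.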

You can read more about inheritance and overriding methods here.
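
For illustration only (toy class names, not from the question), overriding is purely name-based: the subclass method must have exactly the same name as the base-class method it replaces, which is why a method called net_forward is never picked up:

class Base:
    def forward(self, x):
        raise NotImplementedError  # roughly what nn.Module.forward does by default

class Child(Base):
    def forward(self, x):  # same name, so it overrides Base.forward
        return 2 * x

print(Child().forward(3))  # 6 -- the override is used instead of Base.forward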


Old answer:
The code you are running is not the same as the code you posted.
You posted this code:

for step, (data, label) in enumerate(train_data_loader):
    neural_net_model(data)

The code you actually ran (as shown in the error message) is:

# Forward pass
outputs = model(images)

The error you get suggests that the model you are feeding images to is of class nn.Module itself, not an actual implementation derived from nn.Module. In other words, the model you are trying to use does not explicitly implement a forward method. Make sure you are using a model that actually implements one.
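
For reference, a minimal sketch (hypothetical module, not the question's code) that reproduces the same NotImplementedError: a subclass of nn.Module that never defines forward falls back to the base implementation, which raises as soon as the module is called:

import torch
from torch import nn

class Broken(nn.Module):
    def __init__(self):
        super(Broken, self).__init__()
        self.fc = nn.Linear(4, 2)
    # no forward() defined here

model = Broken()
model(torch.randn(1, 4))  # raises NotImplementedError via nn.Module.forward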