3D image registration in medical images using a CNN

Date: 2021-03-27 23:59:22

Tags: python deep-learning pytorch image-registration

Recently, I have been working on 3D image registration for medical images. I want to try two broad approaches: implicit and explicit image registration. The task is to register images from 5 different motion states into a single image. For implicit image registration, the network has 5 input channels and 1 output channel.

Concretely, I have 5 motion states and want to register the first 4 motion-state images to the last one. The target image of the last bin is also fed into the input (the last motion-state image in the input differs slightly from the target image), so the network receives a 5-channel input and produces a 1-channel output.
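To make the channel layout concrete, here is a minimal sketch of how the 5 motion states are stacked into one input volume (the shapes and variable names are illustrative, not my actual data loader):

import torch

# 5 motion states, each a 3D volume of shape (D, H, W); sizes are illustrative
motion_states = [torch.randn(32, 128, 128) for _ in range(5)]

# Channels 0-3 are the moving images, channel 4 is the last bin
input_img = torch.stack(motion_states, dim=0).unsqueeze(0)  # (1, 5, D, H, W)
# The target is the last motion state (in my data it differs slightly
# from the version that goes into the input)
target = motion_states[-1][None, None]                      # (1, 1, D, H, W)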

I use a ResUNet as the network, and I have 65 3D image datasets. The learning rate is 1e-2 to 1e-3 (I am still looking for the best value), and the number of base features is 32 or 64. I use 2 loss functions: a pixel-wise loss (L1) and a pretrained perceptual loss.
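The way the perceptual loss is called in my training loop below (forward(..., normalize=True)) matches the interface of the lpips package; assuming that package, the loss setup could look like this sketch:

import lpips
import torch.nn as nn

pixel_loss = nn.L1Loss()

# Assuming the `lpips` package, whose forward(in0, in1, normalize=True)
# signature matches the call in the training loop. Note that its pretrained
# backbones expect 3-channel inputs, so single-channel slices may need to be
# repeated along the channel dimension first.
perceptual_loss = lpips.LPIPS(net='vgg').to(DEVICE)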

The problem I am running into is that the output resembles the input rather than the target.

Here is an image of the input, output, and target with black lines:

Here is my code. The training part:

import numpy as np
import torch.nn as nn
from tqdm import tqdm

# `args`, `DEVICE` and `utils` are defined elsewhere in my script
def train(epoch, data_loader, model, perceptual_loss, pixel_loss, optimizer):
    model.train()
    epoch_loss = []
    with tqdm(total=len(data_loader), desc=f'Epoch {epoch}/{args.epochs}', unit='batch') as pbar:
        for i, (input_img, target, meta) in enumerate(data_loader):
            input_img, target = input_img.to(DEVICE), target.to(DEVICE)

            output = model(input_img)

            # Mask out the black-line (zero) regions of the target before the loss
            output_masked, target_masked = utils.empty_masking(output, target, DEVICE)
            loss = pixel_loss(output_masked, target_masked)
            # Fold depth into the batch so the 2D perceptual loss runs slice-wise
            output_reshaped = output_masked.permute(2, 0, 1, 3, 4).reshape(-1, 1, *args.image_size)
            target_reshaped = target_masked.permute(2, 0, 1, 3, 4).reshape(-1, 1, *args.image_size)
            if perceptual_loss is not None:
                loss += args.pt_balance * perceptual_loss.forward(output_reshaped, target_reshaped, normalize=True).mean()

            pbar.set_postfix(**{'loss (batch)': loss.item(), 'learning rate': optimizer.param_groups[0]['lr']})

            optimizer.zero_grad()
            loss.backward()
            nn.utils.clip_grad_value_(model.parameters(), 0.1)  # gradient clipping
            optimizer.step()

            pbar.update(1)  # total counts batches, so advance one per batch

            epoch_loss.append(loss.item())
            print('Epoch[{}]({}/{}): Loss: {:.4f}'.format(epoch, i, len(data_loader), loss.item()))

        avg_loss = np.mean(epoch_loss)
        print('Epoch {} complete. Training Loss: {:.4f}'.format(epoch, avg_loss))
    return avg_loss
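For context, this is roughly how I drive the training loop; the Adam optimizer and the train_loader name are illustrative, since I am still experimenting with the learning rate:

import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # trying values between 1e-2 and 1e-3

for epoch in range(1, args.epochs + 1):
    avg_loss = train(epoch, train_loader, model, perceptual_loss, pixel_loss, optimizer)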

The method `utils.empty_masking()` makes sure that no weight updates come from the regions where the target image has black lines, so the weights are not affected by the zero values.
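I have not included `utils.empty_masking()` here; a hypothetical sketch of the idea (masking out voxels where the target is zero so they contribute nothing to the loss) would look like this:

import torch

def empty_masking_sketch(output, target, device):
    # Hypothetical stand-in for utils.empty_masking: zero out the output
    # wherever the target is zero, so those voxels produce no gradients.
    mask = (target != 0).to(device, dtype=output.dtype)
    return output * mask, target * mask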

Here is the code of the ResUNet network.

# DoubleConv, Down, Up and OutConv are the usual 3D UNet building blocks,
# defined elsewhere in my code
class ResUNet(nn.Module):
    def __init__(self, in_channels, out_channels, n_features, trilinear=True, kernel=3, pad=1):
        super(ResUNet, self).__init__()
        self.n_channels = in_channels
        self.n_classes = out_channels
        self.trilinear = trilinear
        factor = 2 if trilinear else 1

        self.inc = DoubleConv(in_channels, n_features, kernel=kernel, pad=pad)
        self.down1 = Down(n_features, n_features * 2, kernel=kernel, pad=pad)                 # 32 -> 64
        self.down2 = Down(n_features * 2, n_features * 4, kernel=kernel, pad=pad)             # 64 -> 128
        self.down3 = Down(n_features * 4, n_features * 8, kernel=kernel, pad=pad)             # 128 -> 256
        self.down4 = Down(n_features * 8, n_features * 16 // factor, kernel=kernel, pad=pad)  # 256 -> 512
        self.up1 = Up(n_features * 16, n_features * 8 // factor, trilinear, kernel=kernel, pad=pad)  # 512 -> 256
        self.up2 = Up(n_features * 8, n_features * 4 // factor, trilinear, kernel=kernel, pad=pad)   # 256 -> 128
        self.up3 = Up(n_features * 4, n_features * 2 // factor, trilinear, kernel=kernel, pad=pad)   # 128 -> 64
        self.up4 = Up(n_features * 2, n_features, trilinear, kernel=kernel, pad=pad)                 # 64 -> 32
        self.outc = OutConv(n_features, out_channels)  # 32 -> 1

    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x6 = self.up1(x5, x4)
        x7 = self.up2(x6, x3)
        x8 = self.up3(x7, x2)
        x9 = self.up4(x8, x1)
        x10 = self.outc(x9)
        logits = torch.clamp(x10, 0, 1)  # clamp the output to the [0, 1] intensity range

        return logits
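As a sanity check on the shapes, a dummy forward pass might look like this (assuming the building blocks behave as in a standard 3D UNet; the volume size is illustrative):

import torch

model = ResUNet(in_channels=5, out_channels=1, n_features=32)
dummy = torch.randn(1, 5, 32, 128, 128)  # (batch, motion states, D, H, W)
out = model(dummy)
print(out.shape)                         # expected: torch.Size([1, 1, 32, 128, 128])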

What should I do to register the images correctly? The output of the network should resemble the target, not the input.

Thank you very much :)
