Cross-hair filters from DeepVesselNet (memory not decreasing)

Hi all,
I am using the cross-hair filter approach from DeepVesselNet for an instance segmentation project with a UNet. The reference code for DeepVesselNet is implemented in Keras, and I am trying to reproduce it in PyTorch. I implemented the convolution with cross-hair filters as follows:

import torch
import torch.nn as nn

class Conv3dCH(nn.Module):
    """Approximates a full 3D convolution with three planar (cross-hair) filters."""
    def __init__(self, in_channels: int, out_channels: int, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros',
                 device=None, dtype=None):
        super().__init__()
        common = dict(stride=stride, dilation=dilation, groups=groups,
                      padding_mode=padding_mode, device=device, dtype=dtype)
        # Pad only along the two non-singleton kernel dims so all three
        # outputs share the same spatial shape and can be summed.
        self.convx = nn.Conv3d(in_channels, out_channels,
                               (kernel_size[0], 1, kernel_size[2]),
                               padding=(padding, 0, padding), bias=False, **common)
        self.convy = nn.Conv3d(in_channels, out_channels,
                               (kernel_size[0], kernel_size[1], 1),
                               padding=(padding, padding, 0), bias=False, **common)
        self.convz = nn.Conv3d(in_channels, out_channels,
                               (1, kernel_size[1], kernel_size[2]),
                               padding=(0, padding, padding), bias=bias,
                               **common)  # apply the bias once, not three times

    def forward(self, x):
        # Sum of the three planar responses approximates a dense 3D kernel.
        return self.convx(x) + self.convy(x) + self.convz(x)
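For what it's worth, the parameter reduction itself does hold for kernels larger than 3; for a 3×3×3 kernel the counts coincide (3·3² = 27 = 3³), so the saving only appears for bigger kernels. A quick sanity check, with made-up channel counts and a 5×5×5 kernel:

full = nn.Conv3d(16, 32, (5, 5, 5), padding=2, bias=False)
cross = Conv3dCH(16, 32, (5, 5, 5), padding=2, bias=False)
print(sum(p.numel() for p in full.parameters()))   # 16 * 32 * 125 = 64000
print(sum(p.numel() for p in cross.parameters()))  # 16 * 32 * (25 + 25 + 25) = 38400

x = torch.randn(1, 16, 32, 32, 32)
print(full(x).shape, cross(x).shape)  # both torch.Size([1, 32, 32, 32, 32])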

Nevertheless, it seems that three nn.Conv3d() layers with planar filters use more memory than a single torch.nn.Conv3d(). Could this be true?

If so, the memory and runtime advantage that DeepVesselNet's cross-hair filters gain from the reduced parameter count would not materialize. Could someone offer some insight into this, or suggest a better approach in PyTorch? Thanks!

Yes, I would assume this to be the case, as the majority of the memory usage comes from the intermediate forward activations, while the filters themselves are tiny. This of course depends on the input shape, but I also assume it is much larger than the kernels.
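One rough way to see this is to compare the peak allocated memory of a single forward pass. A minimal sketch, assuming a CUDA device and the Conv3dCH module from above (the tensor sizes are made up):

import torch
import torch.nn as nn

def peak_forward_mb(module, x):
    # Reset the CUDA peak-memory counter, run one forward pass, report the peak.
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    module(x)
    return torch.cuda.max_memory_allocated() / 2**20

x = torch.randn(1, 16, 64, 64, 64, device='cuda')
full = nn.Conv3d(16, 32, (5, 5, 5), padding=2).cuda()
cross = Conv3dCH(16, 32, (5, 5, 5), padding=2).cuda()
print(f"full:  {peak_forward_mb(full, x):.1f} MiB")
print(f"cross: {peak_forward_mb(cross, x):.1f} MiB")  # typically higher: up to three
                                                      # output-sized tensors coexist

The peaks here reflect the activation maps rather than the weights, which is why shrinking the filters alone does not shrink the memory footprint.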

This also seems to be the same question as the one asked here.
