Convolve 3D tensor along one dimension

Say you had a 3D tensor (batch size = 1):

a = torch.rand(1,3,6,6)

and you wanted to smooth it along the channel axis (i.e. axis 1) with a Gaussian kernel, without smoothing along the 2nd and 3rd axes. How would one do this?

I’ve seen similar posts to this whereby you create a Gaussian kernel of a specified size and then convolve your tensor with it using torch.nn.functional.conv3d(a, kernel).

However, this is a 3D tensor, so I assume you would have to create your kernel as a 3D tensor in which two of the axes are simply ones while the third follows a Gaussian function.

Could someone please show me how to do this correctly within a network? (Assume that the __init__ function instantiates convolutional layers in the following way:

self.conv0 = torch.nn.Conv2d(120,16,3,1,padding=1)
self.conv1 = torch.nn.Conv2d(16,32,3,1,padding=1)

etc., and that you would like to perform this convolution as part of the forward pass of the network.)
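
For concreteness, here is a sketch of what I mean (the layer sizes are just the ones above; the smoothing step is the missing piece I'm asking about):

import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv0 = torch.nn.Conv2d(120, 16, 3, 1, padding=1)
        self.conv1 = torch.nn.Conv2d(16, 32, 3, 1, padding=1)

    def forward(self, x):
        # ... Gaussian smoothing along axis 1 would go here ...
        x = torch.relu(self.conv0(x))
        x = torch.relu(self.conv1(x))
        return x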

Many thanks in advance for help on this!

Edit for clarity: like in this post, my initial attempts to do this involved modifying the kernel to look as follows:

self.kernel = torch.FloatTensor([[[0.006, 0.061, 0.242, 0.383, 0.242, 0.061, 0.006],
                                  [1., 1., 1., 1., 1., 1., 1.],
                                  [1., 1., 1., 1., 1., 1., 1.]]])

I think it would be easier to slice along the dimension you want and then stack, roughly as in the sketch below. Creating a full kernel makes no sense, as you would be wasting computation and memory (most of it just applies an identity kernel).
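
A rough sketch of what I mean (the weights here are just illustrative, and the zero padding mimics what a conv would do at the edges):

import torch

a = torch.rand(1, 3, 6, 6)
w = torch.tensor([0.25, 0.5, 0.25])  # illustrative smoothing weights
# Zero-pad the channel dim so every output channel has two neighbours
padded = torch.nn.functional.pad(a, (0, 0, 0, 0, 1, 1))
slices = [sum(w[k] * padded[:, c + k] for k in range(3)) for c in range(a.size(1))]
smoothed = torch.stack(slices, dim=1)  # back to (1, 3, 6, 6)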

Edit:
I assumed you were talking about image channels, but that's not the case; my bad.
Have a look at this library:
https://torchgeometry.readthedocs.io/en/latest/filters.html
They implement several filters.
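
For example, smoothing spatially with their Gaussian filter looks roughly like this (a sketch against the current kornia API; torchgeometry was later renamed kornia, and note gaussian_blur2d smooths the spatial axes, not the channel axis):

import torch
import kornia

x = torch.rand(1, 3, 6, 6)
# Gaussian blur over (H, W); kernel size and sigma here are illustrative
blurred = kornia.filters.gaussian_blur2d(x, kernel_size=(5, 5), sigma=(1.0, 1.0))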


Thanks for the reply. Could you expand a little on what you mean by slicing and then stacking? I don't see how those operations result in the desired smoothing.

Reply to edit: Oooh, Kornia looks really useful and lightweight, actually. Do you know whether it supports gradient tracking for backpropagation? (i.e. will it break the computation graph?)

Yes, it does. By the way, I was replying about the 3D extension :slight_smile:
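
Kornia's filters are built from plain differentiable tensor ops, so gradients flow through them. A minimal sanity-check sketch of the same idea with a fixed smoothing kernel:

import torch

x = torch.rand(1, 1, 6, 6, 6, requires_grad=True)
kernel = torch.zeros(1, 1, 3, 1, 1)
kernel[0, 0, :, 0, 0] = torch.tensor([0.25, 0.5, 0.25])  # fixed smoothing weights
out = torch.nn.functional.conv3d(x, kernel, padding=(1, 0, 0))
out.sum().backward()
print(x.grad is not None)  # True: the graph is intact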

Thanks, I’m giving that a try now :slight_smile: Can’t seem to find the 3D extension but I’ll keep at it :sweat_smile:

Soo, here it comes:

import torch

tensor_in = torch.rand(1, 1, 6, 6, 6)
kernel = torch.zeros(1, 1, 3, 3, 3)  # 3x3x3 kernel
# kernel shape: (out_channels, in_channels, t, h, w)
# Identity kernel: all zeros except the centre element
kernel[:, :, 1, 1, 1] = 1

# padding=1 keeps the output the same size as the input
tensor_out = torch.nn.functional.conv3d(tensor_in, kernel, padding=1)

print((tensor_in == tensor_out).all())  # tensor(True): identity kernel leaves the input unchanged

First of all, think about how to build the identity kernel, which is all zeros except for the middle element. This is just a sanity check to be sure we are initializing the kernel properly.

If you want to apply a custom smoothing kernel, you will have to think about which axis you want to apply it along.
Then choose a proper kernel size: 1 for the dimensions along which you aren't convolving and N for the chosen axis.
Lastly, be aware of the proper padding size so you don't downsample the tensor.

For example, let's apply the kernel [-1, 1, -1] along the temporal axis:

import torch

tensor_in = torch.rand(1, 1, 6, 6, 6)
kernel = torch.zeros(1, 1, 3, 1, 1)  # 3x1x1 kernel
# kernel shape: (out_channels, in_channels, t, h, w)
kernel[0, 0, :, 0, 0] = torch.FloatTensor([-1, 1, -1])

# Padding (1, 0, 0) is required to keep the output the same size as the input;
# computed as (kernel_size - 1) // 2 for each spatial dim
pad = (torch.tensor(kernel.size()[2:]) - 1) // 2
pad = tuple(pad.tolist())
tensor_out = torch.nn.functional.conv3d(tensor_in, kernel, padding=pad)

You can apply the same logic to any axis.
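
For instance, going back to your original (1, 3, 6, 6) example, you can smooth along the channel axis by adding a dummy channel dimension so the old channel axis becomes the "depth" axis of a 3D conv (a sketch; the Gaussian weights here are just illustrative):

import torch

a = torch.rand(1, 3, 6, 6)
weights = torch.tensor([0.25, 0.5, 0.25])  # illustrative Gaussian weights
kernel = weights.view(1, 1, 3, 1, 1)       # (out_ch, in_ch, t, h, w)

x = a.unsqueeze(1)                         # (1, 1, 3, 6, 6)
out = torch.nn.functional.conv3d(x, kernel, padding=(1, 0, 0))
smoothed = out.squeeze(1)                  # back to (1, 3, 6, 6)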
Last but not least, PyTorch actually computes cross-correlation, not true convolution! https://pytorch.org/docs/stable/nn.html#torch.nn.Conv3d
So keep that in mind when building kernels (the kernel is not flipped before being slid over the input).
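
If you ever need true convolution with an asymmetric kernel like the [-1, 1, -1] above, a quick fix is to flip the kernel along its spatial dims first (a sketch reusing tensor_in, kernel and pad from the example above; for symmetric kernels such as a Gaussian it makes no difference):

true_conv_out = torch.nn.functional.conv3d(tensor_in, torch.flip(kernel, (2, 3, 4)), padding=pad)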


Sorry for the delay! That all appears to work fine, thanks for taking the time to help! :smiley: