Batch conv2d with a different kernel for each sample in the batch

Hi, I wonder if there is a way to apply conv2d to an input batch with multiple kernels.
I want to apply the conv2d operation to each input of the minibatch with its corresponding kernel, where there are B input feature maps and B kernels in each minibatch.

This is similar to the torch.bmm operation, where each matrix in a batch is multiplied by a different weight matrix. I found the backend function THCudaTensor_conv2Dmm. Is this function related to what I want? If so, how can I access it from my Python code?

Thanks,
Paul


You can use the groups parameter in nn.Conv2d to achieve this. If you set groups=in_channels, each input channel will have its own set of filters.
Note that both in_channels and out_channels have to be divisible by groups.
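
For example, here is a minimal depthwise-convolution sketch; the sizes are just illustrative:

import torch
import torch.nn as nn

# groups == in_channels: each of the 16 input channels is convolved
# with its own 3x3 filter, instead of mixing channels together
conv = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3,
                 padding=1, groups=16)

x = torch.randn(8, 16, 32, 32)  # (batch, channels, H, W)
print(conv(x).shape)            # torch.Size([8, 16, 32, 32])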


This is cool and I think it resolves my issue. I will try that. Thank you!

Can you please explain how to set groups so that the following sample code works?

import torch
import torch.nn.functional as F

A = torch.randn(5, 512, 7, 7)  # minibatch, in_channels, iH, iW
B = torch.randn(5, 512, 1, 1)  # minibatch, filter_channels, h, w

# Aim is to run conv2d at the batch level, pairing A[i] with B[i]
out = F.conv2d(A, B)
# However, it applies all the filters to all the batch samples of A
# Desired output shape:  [5, 1, 7, 7]
# Obtained output shape: [5, 5, 7, 7]

Using groups=512 or groups=5 here raises a RuntimeError due to a shape mismatch.

I’ll answer my own query

# Fold the batch dimension into the channel dimension:
# (5, 512, 7, 7) -> (1, 5*512, 7, 7)
A1 = A.view(A.shape[0] * A.shape[1], A.shape[2], A.shape[3]).unsqueeze(0)

# With groups=5, each group of 512 channels is convolved with its own
# filter; permute moves the 5 outputs back to the batch dimension
out = F.conv2d(A1, B, groups=B.shape[0]).permute(1, 0, 2, 3)
print(out.shape)

# Result: torch.Size([5, 1, 7, 7])

Verified by computing the output in batch and individually for each of the 5 entries in the batch, and comparing the results.
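
For reference, a minimal version of that check (reusing A, B, and out from the snippet above; the tolerance is my own choice):

for i in range(A.shape[0]):
    # convolve sample i with its own filter, one pair at a time
    ref = F.conv2d(A[i:i+1], B[i:i+1])  # shape: [1, 1, 7, 7]
    assert torch.allclose(out[i:i+1], ref, atol=1e-6)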

Note that the filter passed to F.conv2d should have shape (out_channels, in_channels / groups, h, w).
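
For example, a quick shape check with made-up sizes (groups=1 here):

import torch
import torch.nn.functional as F

x = torch.randn(2, 4, 8, 8)  # (batch, in_channels, H, W)
w = torch.randn(6, 4, 3, 3)  # (out_channels, in_channels / groups, h, w)
print(F.conv2d(x, w, padding=1).shape)  # torch.Size([2, 6, 8, 8])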