Hi, I am trying to implement a convolution batch-wise using F.conv2d. Below is my current implementation using a for loop. I was wondering whether there is any way to avoid the for loop here?
import torch
import torch.nn.functional as F

B = torch.randn(50, 26)
c = torch.randn(8, 26, 128, 128)
h = torch.randn(8, 1, 50, 1)

outputs = []
for bs in range(c.size(0)):
    # compute Bc
    Bc = torch.matmul(B.unsqueeze(0), torch.reshape(c[bs, :, :, :], (c.size(1), c.size(2) * c.size(3))))
    # compute hBc
    hBc = F.conv2d(Bc.unsqueeze(0), h[bs, :, :, :].unsqueeze(0).flip(2), padding=nPadding).squeeze(0)
    # reshape hBc
    output = torch.reshape(hBc, (1, h.size(2), c.size(2), c.size(3)))
    outputs.append(output)
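As a side note on the first step of the loop: since B is the same matrix for every sample, the matmul alone can already be done without the loop by letting torch.matmul broadcast over the batch dimension. A minimal sketch (the names `c_flat` and `Bc_all` are my own, not from the post above):

```python
import torch

B = torch.randn(50, 26)
c = torch.randn(8, 26, 128, 128)

# Flatten the spatial dims and let torch.matmul broadcast B over the batch:
# (50, 26) @ (8, 26, 16384) -> (8, 50, 16384)
c_flat = c.reshape(c.size(0), c.size(1), -1)
Bc_all = torch.matmul(B, c_flat)

# Agrees with the per-sample matmul from the loop
bs = 0
Bc_loop = torch.matmul(B, c[bs].reshape(c.size(1), -1))  # (50, 16384)
assert torch.allclose(Bc_all[bs], Bc_loop, atol=1e-4)
```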
nPadding is undefined in your snippet, and removing the padding argument raises a shape mismatch.
Could you also explain what B and c are, as it seems Bc is the actual input?
Is the calculation of Bc relevant for the "batch-wise" conv, or just a preprocessing step?
I defined the padding for the convolution as follows:
nPadding = h.size(2)-1
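For what it's worth, padding of kernel_size - 1 gives the "full" correlation length H + K - 1 along that dimension. One caveat (my observation, not from the thread): an int padding in F.conv2d pads both spatial dimensions, while a (padH, padW) tuple pads them separately. A small sketch with toy shapes:

```python
import torch
import torch.nn.functional as F

# padding = K - 1 on the height dim yields the "full" length H + K - 1
# (here 50 + 50 - 1 = 99); the (padH, padW) tuple keeps the width unpadded.
x = torch.randn(1, 1, 50, 4)
k = torch.randn(1, 1, 50, 1)
y = F.conv2d(x, k, padding=(49, 0))
assert y.shape == (1, 1, 99, 4)
```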
You are right that Bc is the actual input. However, the matrix multiplication between B and c is performed batch-wise, i.e. each sample of c is multiplied by the same matrix B. The product Bc is then convolved with h batch-wise, so each sample uses its own filter h[bs]. Hope this is clear.
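Given that each sample has its own filter, one common way to remove the conv loop is to fold the batch into the channel dimension and use a grouped convolution, so that sample i is convolved only with filter i. A sketch under that assumption (shapes shrunk from the original post for illustration; `Bc_all` stands in for the per-sample Bc inputs):

```python
import torch
import torch.nn.functional as F

# Per-sample filters without the loop: fold the batch into the channel
# dimension and use groups=N so sample i only sees filter i.
N, H, W = 8, 50, 256
Bc_all = torch.randn(N, 1, H, W)        # stand-in for the per-sample Bc inputs
h = torch.randn(N, 1, H, 1)             # one (1, H, 1) filter per sample

inp = Bc_all.transpose(0, 1)            # (1, N, H, W): batch becomes channels
out = F.conv2d(inp, h.flip(2), groups=N, padding=(H - 1, 0))
out = out.transpose(0, 1)               # (N, 1, H_out, W) with H_out = 2H - 1

# Matches the per-sample loop body
bs = 3
ref = F.conv2d(Bc_all[bs:bs + 1], h[bs:bs + 1].flip(2), padding=(H - 1, 0))
assert torch.allclose(out[bs:bs + 1], ref, atol=1e-4)
```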