Per-channel convolution with the same convolution kernels

Given filters F1, F2, ..., Fn and a 4D input tensor input of shape [batchsize, channels, height, width], I want to apply every filter to each channel of input individually. If input has c channels, the output of this operation should be a tensor, say t, of shape [c, input.size(0), n, outputHeightAfterConv(input.size(2)), outputWidthAfterConv(input.size(3))], where t[i, :, :, :, :] equals the result of applying all n filters to channel i of input. Below is my attempt at solving the problem, but because I loop over all channels to build the output, the backward pass through this part takes incredibly long.

conv = torch.nn.Conv2d(1, out_channels, kernel_size, stride, padding, dilation, groups, bias)
temp = torch.empty(x.size(1), x.size(0), self.outChannels,
                   self.getOutputHeight(x.size(2)), self.getOutputWidth(x.size(3)))
for i in range(x.size(1)):  # loop over channels
    temp[i] = conv(x[:, i:i+1, :, :])  # treat channel i as a 1-channel image

I have been stuck on this problem for a very long time and am a beginner in PyTorch. Any help would be tremendously appreciated.

Treat the input channels as a batch of 1-channel images:

conv = torch.nn.Conv2d(1, out_channels, kernel_size, stride, padding, dilation, groups, bias)
batchsize, channels = x.shape[0], x.shape[1]
# Fold the channel dimension into the batch dimension: [B, C, H, W] -> [B*C, 1, H, W]
out = conv(x.reshape(-1, 1, x.shape[2], x.shape[3]))
# Unfold: [B*C, n, H_out, W_out] -> [B, C, n, H_out, W_out], then move channels to the front
temp = out.reshape(batchsize, channels, *out.shape[1:]).permute(1, 0, 2, 3, 4)

NOTE: double-check the reshapes above; if the desired output doesn't come out, adjust the reshape dimensions appropriately. (Also be careful about how reshape lays out the elements: folding channels into the batch interleaves them, so the output has to be unfolded in the matching order.)
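To sanity-check the idea, here is a self-contained sketch comparing the channel loop against the fold-into-batch version. The sizes B, C, H, W, n and the kernel size are made up for illustration; any values work:

```python
import torch

B, C, H, W = 4, 3, 8, 8          # batch, channels, height, width (arbitrary)
n = 5                            # number of filters
conv = torch.nn.Conv2d(1, n, kernel_size=3)

x = torch.randn(B, C, H, W)

# Loop version (slow): apply the shared conv to each channel separately.
loop_out = torch.stack([conv(x[:, i:i+1]) for i in range(C)])   # [C, B, n, H_out, W_out]

# Vectorized version: fold channels into the batch, run conv once, unfold.
flat = conv(x.reshape(B * C, 1, H, W))                          # [B*C, n, H_out, W_out]
vec_out = flat.reshape(B, C, *flat.shape[1:]).permute(1, 0, 2, 3, 4)

print(torch.allclose(loop_out, vec_out, atol=1e-6))             # True
```

The `permute` at the end is what produces the channel-first layout the question asks for; without it the result comes out as [B, C, n, H_out, W_out] instead of [C, B, n, H_out, W_out].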
