Hi! My question is about sharing filters in nn.Conv2d / torch.nn.functional.conv2d.
Say I have a 3-channel input of shape (1, 3, 4, 4), i.e. (N, C, H, W), and two 4×4 filters. (I first wrote the filters as a (1, 2, 4, 4) tensor, reading the dims as (batch_size, number of filters, H, W), but I understand conv2d weights actually have shape (out_channels, in_channels / groups, kH, kW), so the two filters would be stored as (2, 1, 4, 4).) What I hope to achieve: let the first filter convolve with each of the three channels separately, giving three 1×1 results, then repeat this with the second filter for another three 1×1 results. In the end I'm hoping to get an output of shape (1, 2*3, 1, 1) = (1, 6, 1, 1).
Is there a PyTorch built-in that can achieve this, or do I need for loops over both the input channels and the filters?
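I think this can be done without loops by folding the input channels into the batch dimension, so that every channel meets every filter independently. Here's a minimal sketch under the shapes above (stride 1, no padding; the tensor names are just placeholders):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 4, 4)  # input: (N, C, H, W)
w = torch.randn(2, 1, 4, 4)  # two 4x4 filters, each spanning one channel

# Fold the 3 channels into the batch dim so each channel is
# convolved with each filter independently.
x_flat = x.view(3, 1, 4, 4)    # (C, 1, H, W)
out = F.conv2d(x_flat, w)      # (3, 2, 1, 1): row i = channel i, col j = filter j

# Reorder to filter-major and restore the batch dim:
# (1, 6, 1, 1) ordered as [f0c0, f0c1, f0c2, f1c0, f1c1, f1c2].
out = out.permute(1, 0, 2, 3).reshape(1, 2 * 3, 1, 1)
print(out.shape)  # torch.Size([1, 6, 1, 1])
```

An equivalent route, if I'm not mistaken, is a grouped convolution: `F.conv2d(x, w.repeat(3, 1, 1, 1), groups=3)` also returns (1, 6, 1, 1), just in channel-major order ([c0f0, c0f1, c1f0, ...]).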
The reason I want to do this is that I'm trying to manually backpropagate gradients through conv2d. I've seen some articles prove that dL/dW is the convolution between the input X and the upstream gradient dL/dY coming back from the next layer. If there are better ways to manually implement backpropagation through a convolution layer, please let me know!
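For what it's worth, here's a minimal sketch of that identity checked against autograd. It assumes a plain conv2d (stride 1, no padding or dilation, groups=1); the trick is to swap the batch and channel dims so each input channel is cross-correlated with each output-gradient channel:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 4, 4, requires_grad=True)
w = torch.randn(2, 3, 4, 4, requires_grad=True)  # (C_out, C_in, kH, kW)

y = F.conv2d(x, w)             # (1, 2, 1, 1)
y.sum().backward()             # autograd reference lands in w.grad

grad_y = torch.ones_like(y)    # dL/dY for loss = y.sum()

# dL/dW as a convolution: swap batch and channel dims so each input
# channel is cross-correlated with each output-gradient channel.
grad_w = F.conv2d(
    x.detach().transpose(0, 1),   # (C_in, N, H, W)
    grad_y.transpose(0, 1),       # (C_out, N, H_out, W_out)
).transpose(0, 1)                 # (C_out, C_in, kH, kW)

print(torch.allclose(grad_w, w.grad))  # True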
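Two caveats I'm aware of: F.conv2d is really cross-correlation, so no kernel flip is needed here, and with stride > 1 or padding the upstream gradient has to be dilated/cropped first. PyTorch also exposes torch.nn.grad.conv2d_weight and torch.nn.grad.conv2d_input, which I believe compute these gradients directly and are handy as a cross-check.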
Thank you very much! This is my first post, so please forgive any formatting mistakes!