Hi, I'm wondering if I can do the same thing for nn.Conv2d and nn.ConvTranspose2d?
Right now I tried x = torch.randn(batch_size, add_features, channels, in_features, in_features), but it raises: RuntimeError: Expected 4-dimensional input for 4-dimensional weight
If you would like to apply the same kernels to add_features, you could fold the add_features dimension into the batch dimension and apply a standard nn.Conv2d.
Here is a small dummy example comparing it against the manual approach:
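A minimal sketch of that idea (all shapes here are made-up placeholders): flatten add_features into the batch dim with view, run the conv once, reshape back, and verify against a manual loop that applies the same conv to each add_features slice.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for the sketch
batch_size, add_features, channels, in_features = 2, 3, 4, 8
out_channels = 6

x = torch.randn(batch_size, add_features, channels, in_features, in_features)
conv = nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)

# Fold add_features into the batch dimension, run the conv, unfold again
x_flat = x.view(batch_size * add_features, channels, in_features, in_features)
out = conv(x_flat)
out = out.view(batch_size, add_features, out_channels, in_features, in_features)

# Manual reference: apply the same conv to each add_features slice in a loop
out_manual = torch.stack([conv(x[:, i]) for i in range(add_features)], dim=1)

print(torch.allclose(out, out_manual, atol=1e-6))
```

Since the same weights are applied either way, both outputs should match up to floating-point tolerance. The same view/reshape trick applies to nn.ConvTranspose2d.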