Using same weights across all channels (not grouping)

Hey everyone,

Very simple question:
I have an input of shape [Batch, C_1, H, W],
want an output of shape [Batch, C_1, H, 1],
and plan to use a Conv2d with kernel shape (1, W).
However, I only want to use W parameters for this, shared across all channels.

If I use the groups option I can reduce the number of weights, but at best each channel ends up with its own set of weights. That still gives me C_1 x W parameters.
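
For reference, here is a minimal sketch (sizes are made up) of what the groups option gives me: each channel gets its own (1, W) filter, so the weight tensor still holds C_1 x W parameters.

import torch.nn as nn

C_1, W = 8, 32  # example sizes
grouped = nn.Conv2d(C_1, C_1, kernel_size=(1, W), groups=C_1)
print(grouped.weight.shape)  # torch.Size([8, 1, 1, 32]) -> C_1 * W weights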

Is there anything better than using a loop, or am I missing something?

Kind regards,
Magnus

The right thing to do here is to use an nn.Conv2d(1, 1, (1, W)) and fold the channel dim into the batch dim:

import torch
import torch.nn as nn

Batch, C_1, H, W = 4, 8, 16, 32  # example sizes
input = torch.randn(Batch, C_1, H, W)
m = nn.Conv2d(1, 1, (1, W))  # one (1, W) filter: W weights plus a single bias

# fold the channel dim into the batch dim so every channel goes through the same filter
output = m(input.view(Batch * C_1, 1, H, W)).view(Batch, C_1, H, 1)
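
With this, m.weight has shape (1, 1, 1, W), i.e. exactly W weights plus a single bias, and because every channel passes through the same module, those weights are shared across all channels.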
