How to multiply a weight matrix (2D) with a tensor (3D, but the channel count differs between inputs)

Hi guys,

I ran into a problem these days while implementing an idea in PyTorch.

In some layers of my network I have a tensor of shape (Channel × H × W), and the number of channels differs between inputs.
I want to multiply a 2D weight matrix with every channel of that tensor, i.e. Weight × sub-tensor (1 × H × W): the feature map in every channel should be multiplied by the same weight matrix.
How can I implement this in PyTorch? Using a for loop, something like:

out = torch.empty_like(tensor)
for i in range(channel):
    out[i] = torch.mm(weight, tensor[i])  # multiply every channel by the same 2D weight

Will this backpropagate correctly?

It seems you just want to multiply a tensor of shape [C, H, W] with a tensor of shape [1, H, W].
If so, you can just use this simple code:

import torch

x = torch.ones(3, 5, 5)            # [C, H, W] input
weight = torch.ones(1, 5, 5) * 2   # [1, H, W] weight, broadcast across the channel dim
x * weight                         # elementwise product, shape [3, 5, 5]
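The same line works no matter how many channels the input has, since broadcasting only needs the trailing H and W dimensions to match. A quick sketch with made-up channel counts, continuing from the snippet above:

for c in (3, 7, 16):         # the channel count can change from input to input
    x = torch.randn(c, 5, 5)
    out = x * weight         # weight is still [1, 5, 5]
    print(out.shape)         # -> (c, 5, 5)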

I understand what you mean: the weight matrix can be broadcast.
But in PyTorch code I always see torch.mm(A, B) (or bmm for batches) instead of A * B, and I'm wondering: if I use A * B, will my network backpropagate correctly (with autograd)?

Yes, autograd will track the multiplication and can backpropagate through it.
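If you want to convince yourself, here is a minimal check (shapes made up) that requires gradients on both tensors and calls backward on the broadcasted product:

import torch

x = torch.randn(3, 5, 5, requires_grad=True)
weight = torch.randn(1, 5, 5, requires_grad=True)

(x * weight).sum().backward()   # autograd tracks the broadcasted elementwise product

print(x.grad.shape)        # torch.Size([3, 5, 5])
print(weight.grad.shape)   # torch.Size([1, 5, 5]), gradients are summed over the broadcast dim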


I understand your question: you want to convert the 2D weight matrix into a 3D tensor, but every such 3D tensor has different dimensions, is that right?

Yes!
The channel dimension of the 3D weight tensor depends on the input's channel count.
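So I think something like this sketch covers my case (scale_channels is just a made-up helper name; it assumes a single H × W weight shared by all channels, so no explicit 3D weight ever has to be built):

import torch

def scale_channels(x, weight_2d):
    # x: [C, H, W] with C varying per input; weight_2d: [H, W]
    return x * weight_2d.unsqueeze(0)   # broadcast the same weight over every channel

weight_2d = torch.randn(5, 5)
print(scale_channels(torch.randn(3, 5, 5), weight_2d).shape)   # torch.Size([3, 5, 5])
print(scale_channels(torch.randn(8, 5, 5), weight_2d).shape)   # torch.Size([8, 5, 5])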