Multiplying tensors in place

With two tensors

a = torch.ones([256, 512, 32])
b = torch.ones([32, 2])

what is the most efficient way to broadcast b onto every associated entry in a, producing a result with shape

[256, 512, 32, 2] ?

Is there an in-place variant, maybe? I want to avoid making copies as much as possible…

a.unsqueeze(-1).mul(b.unsqueeze(0).unsqueeze(0))

It’s sort of hard to see in your example, though, because you are multiplying everything by ones — try it in a REPL with random values to confirm the result. As of PyTorch 1.3 you could also use named tensors: name a's dimensions something like N, C, H and b's dimensions H, W… basically just ensure that the dimensions align correctly.
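For what it’s worth, the last two unsqueezes shouldn’t even be necessary: broadcasting aligns trailing dimensions, so [256, 512, 32, 1] broadcasts against [32, 2] directly. A quick check with random values (so correctness is actually visible, unlike with ones):

```python
import torch

a = torch.randn(256, 512, 32)
b = torch.randn(32, 2)

# A single unsqueeze on `a` is enough: [256, 512, 32, 1] broadcasts
# against [32, 2] because shapes are aligned from the trailing dim.
out = a.unsqueeze(-1) * b
assert out.shape == (256, 512, 32, 2)

# Spot-check one entry against the explicit per-slice computation.
assert torch.allclose(out[3, 5], a[3, 5].unsqueeze(-1) * b)
```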

Thank you, but do I really need those last two unsqueezes? This seems to work:


a = torch.ones([256, 512, 32]).unsqueeze(-1)
b = torch.ones([32, 2])
a * b

Also, wouldn’t mul_ be more efficient here?

The output is not the same size as either of the inputs, so you can’t really do it in place — an in-place op like mul_ writes the result back into the tensor it is called on, which would require the result to fit that tensor’s shape.
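If the goal is just to avoid repeated allocations (e.g. inside a training loop), one workaround is to preallocate the result buffer once and write into it with the out= argument of torch.mul — a minimal sketch:

```python
import torch

a = torch.randn(256, 512, 32)
b = torch.randn(32, 2)

# Allocate the [256, 512, 32, 2] result once, reuse it every iteration.
buf = torch.empty(256, 512, 32, 2)

# torch.mul broadcasts as before but writes into `buf` instead of
# allocating a fresh output tensor.
torch.mul(a.unsqueeze(-1), b, out=buf)
```

This isn’t in-place in the mul_ sense, but it keeps the allocation out of the hot path.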