# Multiply two tensors along an axis

Hi, I have a tensor `x1` of shape 4x3x2x2 and a tensor `x2` of shape 4x1. I would like to multiply `x1` and `x2` element by element along axis 0 (which has a dimension of 4). Each such multiplication would be between a 3x2x2 tensor and a scalar, so the result would be a 4x3x2x2 tensor.

A for-loop implementation is below. Is there a better (parallel) implementation, perhaps using one of PyTorch's multiply functions? Thanks a lot!

```python
import torch

x1 = torch.rand(4, 3, 2, 2)
x2 = torch.rand(4, 1)

for i in range(4):
    print(x1[i] * x2[i])
```

Hi Yang!

If I understand what you are asking, you could either transpose and use

```python
(x1.transpose(0, 3) * x2.squeeze()).transpose(0, 3)
```

or use `torch.einsum` (“Einstein summation”):

```python
torch.einsum('ijkl, im -> ijkl', x1, x2)
```
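For this particular case, plain broadcasting may be the simplest option of all: reshape `x2` so its trailing dimensions have size one, and PyTorch will broadcast the multiplication across the remaining axes. A minimal sketch, assuming the shapes from the original post:

```python
import torch

x1 = torch.rand(4, 3, 2, 2)
x2 = torch.rand(4, 1)

# Reshape x2 from (4, 1) to (4, 1, 1, 1) so it broadcasts
# against x1's shape (4, 3, 2, 2): each x1[i] is scaled by x2[i].
result = x1 * x2.view(4, 1, 1, 1)

# Sanity-check against the explicit for-loop version.
expected = torch.stack([x1[i] * x2[i] for i in range(4)])
print(torch.allclose(result, expected))  # prints True
```

The broadcast multiply allocates only the output tensor and avoids both the transposes and the einsum string parsing.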

Best.

K. Frank

Thanks a lot!!!

I would appreciate your comment on this: which of these two methods is faster and which has a smaller memory consumption?

Thanks again and best!

Hi Yang!

I don’t know which approach would be more efficient. (I’ve never tested it.)

I could see `torch.einsum()` having a little extra overhead because it has
to parse the `'ijkl, im -> ijkl'` string. It could also be significantly less
efficient in certain situations if its generality prevents it from performing
the tensor multiplications optimally. But maybe it's smart enough to figure
out the optimal approach.
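One could get a rough answer empirically by timing the two one-liners with `torch.utils.benchmark`. A sketch, not a definitive benchmark; the numbers will depend on tensor sizes, dtype, and hardware:

```python
import torch
import torch.utils.benchmark as benchmark

x1 = torch.rand(4, 3, 2, 2)
x2 = torch.rand(4, 1)

# Time the transpose-based approach.
t_transpose = benchmark.Timer(
    stmt="(x1.transpose(0, 3) * x2.squeeze()).transpose(0, 3)",
    globals={"x1": x1, "x2": x2},
)

# Time the einsum approach.
t_einsum = benchmark.Timer(
    stmt="torch.einsum('ijkl, im -> ijkl', x1, x2)",
    setup="import torch",
    globals={"x1": x1, "x2": x2},
)

print(t_transpose.timeit(1000))
print(t_einsum.timeit(1000))
```

For tensors this small the timings are dominated by per-call overhead, so it would be worth rerunning with realistically sized inputs before drawing conclusions.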

Best.

K. Frank