So I have a problem with multiplying matrices.

I have a 4-dimensional tensor with dimensions 3x6x4x4. I want to take the dot product of all the tensors so that the final result is a 3x1x4x4 tensor (or 3x4x4, it doesn't matter).

I want to avoid using for loop and iterating through the first dimension.

Can I perform this operation? I tried torch.bmm, but it only works on 3D tensors.

Thanks

You can use `torch.prod`

```
import torch

a = torch.rand(3, 6, 4, 4)

# This ↓
out_0 = torch.prod(a, dim=1)
# is the same as multiplying everything along the specified dimension:
out_1 = a[:, 0, :, :] * a[:, 1, :, :] * a[:, 2, :, :] * a[:, 3, :, :] * a[:, 4, :, :] * a[:, 5, :, :]
print(torch.all(out_0 == out_1))
# Output:
# tensor(True)
```


Does that perform a dot product of the tensors or a standard elementwise multiplication? I need the dot product for my specific problem.

That performs a standard elementwise multiplication. For matrix multiplication you can use one of these:

```
import torch

a = torch.rand(3, 6, 4, 4)
out_1 = ((((a[:, 0, :, :] @ a[:, 1, :, :]) @ a[:, 2, :, :]) @ a[:, 3, :, :]) @ a[:, 4, :, :]) @ a[:, 5, :, :]
out_2 = torch.einsum('abcd, abde, abef, abfg, abgh, abhi->aci', *torch.split(a, 1, dim=1))
print(torch.allclose(out_1, out_2))
# Output:
# True
```

I should maybe have pointed out that my tensors are of size N x M x 4 x 4, so I can't hardcode values, because they change based on the input size (sometimes it can be 3 x 6 x 4 x 4, but sometimes it can be 100 x 50 x 4 x 4, so I don't think einsum would be suitable here). Is there a workaround for that?

Also, one more question: is einsum much faster than a for loop in this case?

This involves a small for loop to build the `einsum` arguments, but it should work for any M:

```
a_split = list(torch.split(a, 1, dim=1))

# Build the operands for torch.einsum's sublist format:
# tensor i is indexed [..., i, i + 1], so consecutive matrices share an
# index that gets contracted, i.e. a chained matrix multiplication.
operands = []
for i, t in enumerate(a_split):
    operands.append(t)
    operands.append([..., i, i + 1])
operands.append([..., 0, len(a_split)])  # output sublist

out_3 = torch.einsum(*operands)  # shape (N, 1, 4, 4)
```
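If the sublist trick feels fragile, another option (just a sketch, not from this thread; `chain_matmul_along_dim1` is my own name) is a pairwise tree reduction with the batched `@` operator: each step halves M with a single batched matmul, so the Python loop runs only O(log M) times no matter how large M gets.

```python
import torch

def chain_matmul_along_dim1(a: torch.Tensor) -> torch.Tensor:
    """Multiply the M matrices along dim 1 of an (N, M, K, K) tensor, in order."""
    while a.shape[1] > 1:
        if a.shape[1] % 2:  # odd count: set the last matrix aside
            rest, last = a[:, :-1], a[:, -1:]
        else:
            rest, last = a, None
        # One batched matmul multiplies every adjacent pair at once.
        a = rest[:, 0::2] @ rest[:, 1::2]
        if last is not None:
            a = torch.cat([a, last], dim=1)
    return a  # shape (N, 1, K, K)

out = chain_matmul_along_dim1(torch.rand(3, 6, 4, 4))
print(out.shape)  # torch.Size([3, 1, 4, 4])
```

Because the last matrix is always set aside and re-appended when M is odd, the left-to-right order of the chain is preserved.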

You could `timeit` it and see if it really improves the performance, but I am not sure.
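As a rough sketch (the shape, repeat count, and function names here are my own example, not from the thread), you could compare a plain per-matrix loop against a sublist-format `torch.einsum` like this:

```python
import timeit

import torch

a = torch.rand(100, 10, 4, 4)  # example shape; the thread mentions up to 100 x 50 x 4 x 4

def loop_version(a):
    # Straightforward Python loop over the M dimension.
    out = a[:, 0]
    for i in range(1, a.shape[1]):
        out = out @ a[:, i]
    return out

def einsum_version(a):
    # The same chain expressed in torch.einsum's sublist format.
    operands = []
    for i, t in enumerate(torch.split(a, 1, dim=1)):
        operands.append(t)
        operands.append([..., i, i + 1])
    operands.append([..., 0, a.shape[1]])  # output sublist
    return torch.einsum(*operands).squeeze(1)

print("loop  :", timeit.timeit(lambda: loop_version(a), number=50))
print("einsum:", timeit.timeit(lambda: einsum_version(a), number=50))
```

Timings depend heavily on N, M, and the backend, so it is worth measuring with your actual sizes rather than trusting one benchmark.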