torch.matmul(): operating on the leading dimensions

For high-dimensional tensors, torch.matmul() performs the matrix multiplication only over the last two dimensions and requires the leading (batch) dimensions to match or broadcast. But in my research, matrix multiplication over the leading dimensions also makes sense.
For example, we can do this:

import torch

Mat_A = torch.randn(10, 20, 2, 32)  # size (10, 20, 2, 32)
Mat_B = torch.randn(10, 20, 32, 3)  # size (10, 20, 32, 3)
torch.matmul(Mat_A, Mat_B)          # result size: (10, 20, 2, 3)

But how can we implement this:

Mat_a = torch.randn(10, 20, 30)  # size (10, 20, 30)
Mat_b = torch.randn(5, 10)       # size (5, 10)
# desired result size: (5, 20, 30)
torch.some_operate(Mat_b, Mat_a)  # placeholder; no such op exists

I can use loops to do the calculation (sketched in code after the example below), but it's not very elegant. This is a simple example:

Mat_b is:
[[a, b],
 [c, d]]
Mat_a is:
[[A, B],
 [C, D]]
then the result of some_operate(Mat_b, Mat_a) is:
[[aA+bC, aB+bD],
 [cA+dC, cB+dD]]
where a, b, c, d are scalars and A, B, C, D are vectors.
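
A minimal loop sketch of this operation (using the hypothetical some_operate name from above, and the shapes given in the question): each scalar Mat_b[l, i] scales the whole slice Mat_a[i], and the scaled slices are summed.

import torch

def some_operate(Mat_b, Mat_a):
    # out[l] = sum_i Mat_b[l, i] * Mat_a[i]
    out = torch.zeros(Mat_b.shape[0], *Mat_a.shape[1:])
    for l in range(Mat_b.shape[0]):
        for i in range(Mat_b.shape[1]):
            out[l] += Mat_b[l, i] * Mat_a[i]
    return out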

You can use torch.einsum for this:

torch.einsum('ijk,li->ljk', Mat_a, Mat_b)

It will do the rearranging of dimensions for you.
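
For example, checking the shape and cross-checking the values with tensordot (the tensordot comparison is just an independent way to verify, assuming the shapes from the question):

import torch

Mat_a = torch.randn(10, 20, 30)
Mat_b = torch.randn(5, 10)

out = torch.einsum('ijk,li->ljk', Mat_a, Mat_b)
print(out.shape)  # torch.Size([5, 20, 30])

# tensordot contracts Mat_b's dim 1 against Mat_a's dim 0; same result
ref = torch.tensordot(Mat_b, Mat_a, dims=([1], [0]))
print(torch.allclose(out, ref))  # True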

Best regards

Thomas

Update: See Tim Rocktäschel’s post on einsum for lots of applications.


Wow, I had learned einsum before but never thought of using it this way. That's awesome, thanks!