Memory requirement for torch.matmul

Hi Everyone,
I have the following line of code in PyTorch:

W = torch.matmul(mask, V_scaled_diag)

Let us assume that mask has dimension 2304x768 and V_scaled_diag has dimension 768x2304, so W has dimension 2304x2304. The memory required to do this computation should be:

Mem = (2304x768 + 768x2304 + 2304x2304) x 4 Bytes ≈ 33.75 MB (assuming Float32) + x Bytes

where x Bytes are required to store the gradients, intermediate tensors, etc.

Given the sizes of the matrices, is there an easy way to estimate what this extra usage (the value of x) might be?

Also, is it possible to check the memory occupied by each tensor in the computation above?

Any inputs are appreciated.

The cuBLAS workspace will use some memory, and you could clear it via torch._C._cuda_clearCublasWorkspaces() if needed:

import torch

print(torch.cuda.memory_allocated() / 1024**2)
# 0.0
print(torch.cuda.memory_reserved() / 1024**2)
# 0.0

a = 2304
b = 768
mask = torch.randn(a, b, device="cuda")
V_scaled_diag = torch.randn(b, a, device="cuda")
print(torch.cuda.memory_allocated() / 1024**2)
# 13.5
print(torch.cuda.memory_reserved() / 1024**2)
# 20.0
print("expected: {}MB".format((a*b*4 + b*a*4)/1024**2))
#expected: 13.5MB

W = torch.matmul(mask, V_scaled_diag)
print(torch.cuda.memory_allocated() / 1024**2)
# 41.875
print(torch.cuda.memory_reserved() / 1024**2)
# 62.0
expected = (a*b*4 + b*a*4 + a*a*4)/1024**2
print("expected: {}MB".format(expected))
# expected: 33.75MB
print("delta: {}MB".format(expected - torch.cuda.memory_allocated() / 1024**2))
# delta: -8.125MB
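# the ~8MB gap between the expected and allocated memory is the cuBLAS
# workspace created by the matmul call; clearing it below removes it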

# release cublas workspace
torch._C._cuda_clearCublasWorkspaces()
print(torch.cuda.memory_allocated() / 1024**2)
# 33.75
print(torch.cuda.memory_reserved() / 1024**2)
# 62.0
print("expected: {}MB".format((a*b*4 + b*a*4 + a*a*4)/1024**2))
# expected: 33.75MB
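To answer the second question (a rough sketch, not part of the measurement above): each tensor's own storage can be estimated from its element size and number of elements. Note this ignores the caching allocator's block rounding and tensors that share the same storage (e.g. views):

# rough per-tensor storage estimate: bytes per element times number of elements
def tensor_mb(t):
    return t.element_size() * t.nelement() / 1024**2

for name, t in [("mask", mask), ("V_scaled_diag", V_scaled_diag), ("W", W)]:
    print("{}: {}MB".format(name, tensor_mb(t)))
# mask: 6.75MB
# V_scaled_diag: 6.75MB
# W: 20.25MB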

Thank you @ptrblck.

I would like to better understand how PyTorch allocates memory for intermediate or backprop tensors so that I can justify one implementation over another. Do you have any recommendations?

Intermediate activations are stored if a differentiable computation graph is created and if these activations are needed for the gradient calculation during the backward call. The derivatives.yaml file defines which tensors are needed.
Thus it depends a bit on your actual use case, and a small example would be great to have.
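As a rough illustration of the first point (a small sketch, not your exact use case): when the inputs require gradients, intermediate activations needed by the backward pass stay allocated after the forward call, so the same forward code reports more allocated memory than a no-grad run:

import torch

def forward_memory(requires_grad):
    # same shapes as above; the relu output is an intermediate activation
    x = torch.randn(2304, 768, device="cuda", requires_grad=requires_grad)
    w1 = torch.randn(768, 2304, device="cuda", requires_grad=requires_grad)
    w2 = torch.randn(2304, 768, device="cuda", requires_grad=requires_grad)
    out = torch.relu(x @ w1) @ w2
    # with requires_grad=True the relu output is saved for backward, so more
    # memory stays allocated here (the values also include the cuBLAS workspace)
    print("requires_grad={}: {:.2f}MB allocated".format(
        requires_grad, torch.cuda.memory_allocated() / 1024**2))
    return out

forward_memory(False)
forward_memory(True)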