How to automatically free intermediate tensors in memory?

Hi, I noticed that my Jupyter notebook uses a very large amount of memory (~40 GB) while running, and after using the tool here: How to debug causes of GPU memory leaks? - #2 by SpandanMadan, I found that most of the memory is held by intermediate tensor variables. For instance, if I have a large NumPy array X and pass it to a network net as net.cpu()(torch.tensor(X)), the intermediate tensor torch.tensor(X) stays in memory even after the function call has finished.
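
For reference, here is a minimal sketch of the pattern I mean (the Linear layer, shapes, and names net and X are just placeholders standing in for my real model and array):

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder model and data standing in for my real network and array
net = nn.Linear(1000, 10)
X = np.random.rand(100_000, 1000).astype(np.float32)

# The temporary torch.tensor(X) created here seems to stay alive even
# after the call returns (it shows up in the memory debugging tool above).
out = net.cpu()(torch.tensor(X))
```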

I am wondering if there is a way to keep only “named” variables/tensors in memory while automatically freeing anything that is an intermediate result.

Thank you!

Hey! I have the same question and am looking forward to a reply.
Thanks.

Intermediate tensors will be freed once backward() (without retain_graph=True) is called and these intermediates are no longer needed for the gradient computation.
If you don’t need to compute gradients and call backward(), you can wrap the forward pass in a with torch.no_grad() block, which avoids storing the intermediate activations in the first place.
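
For example, a minimal sketch of both cases, assuming net is an nn.Module and X a NumPy array as in your example (the Linear layer and shapes here are placeholders):

```python
import numpy as np
import torch
import torch.nn as nn

net = nn.Linear(1000, 10)   # placeholder for the actual model
X = np.random.rand(100_000, 1000).astype(np.float32)

# Inference only: inside no_grad() no autograd graph is built, so the
# input tensor and intermediate activations are not kept alive for a
# later backward() and can be garbage collected right away.
with torch.no_grad():
    out = net.cpu()(torch.tensor(X))

# Training case: the intermediates saved for the gradient computation
# are freed by backward() itself, as long as retain_graph=True is not set.
out = net.cpu()(torch.tensor(X))
loss = out.sum()
loss.backward()
```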

CC @Shanbhag