List all the tensors and their memory allocation

You could use an approach similar to the one described in this post to get all tensors tracked by Python.
However, this would not return tensors allocated in the backend (e.g. CUDA caching-allocator memory not attached to a live Python tensor), so you might additionally want to check the memory usage via e.g. `print(torch.cuda.memory_summary())` to narrow down where you are running out of memory.
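As a rough sketch of the first approach (not the exact snippet from the linked post), you could scan the objects tracked by Python's garbage collector and report each tensor's shape, dtype, device, and size; the `list_live_tensors` helper name is mine:

```python
import gc
import torch

def list_live_tensors():
    """Return (shape, dtype, device, size_in_bytes) for every tensor
    the Python garbage collector can currently see."""
    infos = []
    for obj in gc.get_objects():
        try:
            # also catch objects wrapping a tensor in a .data attribute
            if torch.is_tensor(obj) or (hasattr(obj, "data") and torch.is_tensor(obj.data)):
                t = obj if torch.is_tensor(obj) else obj.data
                infos.append((tuple(t.shape), t.dtype, str(t.device),
                              t.element_size() * t.nelement()))
        except Exception:
            # some tracked objects raise on attribute access; skip them
            pass
    return infos

x = torch.zeros(1000, 1000)  # ~4 MB of float32
for shape, dtype, device, nbytes in list_live_tensors():
    print(shape, dtype, device, f"{nbytes / 1024**2:.1f} MB")
```

Note that this only finds tensors reachable from Python, which is why comparing against `torch.cuda.memory_summary()` is useful: a gap between the two points at backend-side allocations.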