Backward process tracking for unexpected GPU memory increases

Is there any method to show the details of what happens when we run loss.backward()?
When I run the single line loss.backward(), GPU memory usage increases a lot (inference takes about 4 GB, but after running this line the cost jumps to about 12 GB). If I could track the details of the backward graph, I could find where the problem is.
For now, I can only run this single line and get a bad result…
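To make the jump concrete, here is a minimal sketch of how the memory increase around the backward call can be measured. The model and input sizes are hypothetical stand-ins (the real model isn't shown here), and it assumes a PyTorch version that provides torch.cuda.memory_allocated / max_memory_allocated (0.4+):

```python
import torch

# Hypothetical model and input; substitute your own model and batch.
model = torch.nn.Linear(4096, 4096).cuda()
x = torch.randn(256, 4096, device="cuda")

loss = model(x).sum()
torch.cuda.synchronize()
print("after forward:  %.2f GiB" % (torch.cuda.memory_allocated() / 1024**3))

loss.backward()
torch.cuda.synchronize()
print("after backward: %.2f GiB" % (torch.cuda.memory_allocated() / 1024**3))
print("peak:           %.2f GiB" % (torch.cuda.max_memory_allocated() / 1024**3))
```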

Thanks very much for any help.

I think that would be a great feature, but sadly it doesn’t exist (yet).

For now, there is a profiler which helps you find bottlenecks in your model. It gives no information about memory consumption, only the time taken, but there is often a correlation between time and memory!

Take a look:
http://pytorch.org/docs/0.3.0/autograd.html#profiler
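For reference, here is a rough sketch of how the autograd profiler can be wrapped around a forward/backward pass to see which backward ops dominate. The exact API differs slightly across versions; this assumes a 1.x-era PyTorch where the profiler lives under torch.autograd.profiler, and the model/input are hypothetical:

```python
import torch
from torch.autograd import profiler

# Hypothetical model and input; replace with your own.
model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")

with profiler.profile(use_cuda=True) as prof:
    loss = model(x).sum()
    loss.backward()

# Per-op timings; backward ops appear with names like "*Backward".
print(prof.key_averages().table(sort_by="cuda_time_total"))
```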