Do I also need to call detach() in a torch.no_grad() scope?

My concern is memory leaks and CUDA out-of-memory errors. I'm still a new PyTorch user, so I wanted to check.

I want to evaluate my model inside a no_grad block and save intermediate outputs from it into lists that live outside the block. I don't want to accidentally keep a reference to the computation graph and cause CUDA out-of-memory errors.
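Roughly, the pattern I have in mind looks like this (the model and inputs are just placeholders):

```python
import torch

# Placeholder model and inputs, just to illustrate the pattern
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(16, 4).to(device)
inputs = [torch.randn(8, 16, device=device) for _ in range(3)]

features = []  # lives outside the no_grad block
with torch.no_grad():
    for x in inputs:
        out = model(x)
        features.append(out)  # do I need out.detach() here?
```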

Do I need to call detach(), or will the context manager take care of it?

Hi,

No, there is no need to call .detach() inside a no_grad() block, because no autograd information is recorded for operations run there.
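You can verify this yourself with a minimal sketch (the toy model below is just for illustration):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)

with torch.no_grad():
    out = model(x)

# No graph is recorded inside no_grad, so there is nothing to detach:
print(out.requires_grad)  # False
print(out.grad_fn)        # None
```

One separate point on memory: the tensors you append to the list still occupy GPU memory until the list is cleared. That has nothing to do with autograd, so if you accumulate a lot of outputs and memory is tight, you can move them off the device as you store them, e.g. `features.append(out.cpu())`.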