I'm concerned about memory leaks and CUDA memory errors, and since I'm still a new PyTorch user, I wanted to check on this.
I want to evaluate my model inside a no_grad block and save intermediate variables from it into lists that outlive the block. I don't want to accidentally hold a reference to the computation graph and cause CUDA out-of-memory errors.
Do I need to call detach(), or will the context manager take care of it?
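For concreteness, here is a minimal sketch of the pattern I have in mind (the model, shapes, and loop are just placeholders, not my real code):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 5).to(device)  # placeholder model
model.eval()

outputs = []  # list that outlives the no_grad block

with torch.no_grad():
    for _ in range(100):
        x = torch.randn(32, 10, device=device)  # placeholder input
        y = model(x)
        # Is it safe to append y as-is, or should this be
        # y.detach() (or y.cpu()) to avoid keeping graph references?
        outputs.append(y)
```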