Regarding the use of `ctx.save_for_backward`: according to this comment, a memory leak no longer seems to occur starting from PyTorch 0.4. Is that true?
That applies to intermediary results, not to inputs/outputs. For inputs and outputs you still have to use `save_for_backward()`.
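To make the distinction concrete, here is a minimal sketch of a custom `torch.autograd.Function` (the `Square` class and its contents are illustrative, not from the thread): tensors that are inputs or outputs of `forward()` go through `ctx.save_for_backward()`, while values that are neither can be stored as plain attributes on `ctx`.

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # x is an input tensor: use save_for_backward() so autograd can
        # track in-place modifications and manage the tensor's lifetime.
        ctx.save_for_backward(x)
        # An intermediary (non-tensor / non-input-output) value may be
        # stored directly on ctx.
        ctx.scale = 2.0
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d(x^2)/dx = 2x
        return grad_output * ctx.scale * x

x = torch.randn(3, requires_grad=True)
y = Square.apply(x)
y.sum().backward()
print(torch.allclose(x.grad, 2 * x))  # gradient matches 2x
```

Saving an input or output tensor as a plain `ctx` attribute instead of via `save_for_backward()` is what the leak discussion is about: older versions could keep reference cycles alive that way, and `save_for_backward()` also lets autograd detect stale (modified in-place) tensors.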