save_for_backward in no_grad mode

Does save_for_backward prevent memory from being freed or reused when running under torch.no_grad()? Should I skip calling save_for_backward in eval/no_grad mode?

No, save_for_backward has no effect under no_grad mode, so you don't need to skip it. When grad mode is disabled, autograd does not record the operation, so no graph node is created and the tensors passed to save_for_backward are not retained after forward returns. They hold no extra memory.
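
A quick way to see this, as a minimal sketch (the Square function below is just for illustration, not part of any library):

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Saved tensors are only kept alive by the autograd node;
        # if no node is recorded, they are released after forward.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(3, requires_grad=True)

# With grad mode enabled: the op is recorded, the input is saved,
# and the output carries a grad_fn.
y = Square.apply(x)
print(y.grad_fn)  # <...SquareBackward object at ...>

# Under no_grad: no autograd node is created, so nothing is
# retained by save_for_backward and the output has no grad_fn.
with torch.no_grad():
    y = Square.apply(x)
print(y.grad_fn)  # None
```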