Hello, they give the same result. Is there any particular case where detach().cpu() is preferred over cpu().detach(), or vice versa?
I would prefer detach().cpu(), as this detaches the tensor while it's still on the GPU, so the subsequent cpu() operation won't be tracked by Autograd. That being said, it's a minor difference and you shouldn't see any difference between the two approaches.
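A minimal sketch illustrating that both orderings produce the same detached CPU tensor (it falls back to the CPU if no CUDA device is available, so the example still runs either way):

```python
import torch

# Use the GPU when available; otherwise the orderings are trivially the same.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(3, requires_grad=True, device=device)

a = x.detach().cpu()  # detach first (on the device), then move to CPU
b = x.cpu().detach()  # move first (tracked by autograd), then detach

print(torch.equal(a, b))                  # True: identical values
print(a.requires_grad, b.requires_grad)   # False False: both are detached
```

The only difference is that in the second ordering the cpu() copy is briefly part of the computation graph before being detached.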
Thanks! Is there any case where cpu() would be preferred over detach()?
I don't understand the question. The cpu() operation transfers the tensor to the CPU (if it's not already there), while detach() cuts it from the computation graph. Could you explain the question a bit more?
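A short sketch of that distinction: cpu() only moves data, while detach() only removes Autograd tracking, so they are not interchangeable:

```python
import torch

x = torch.randn(2, requires_grad=True)  # leaf tensor tracked by autograd

moved = x.cpu()     # data transfer only; still tracked by autograd
cut = x.detach()    # same data and device, but no grad tracking

print(moved.requires_grad)  # True
print(cut.requires_grad)    # False
```

In practice you typically need both before converting to NumPy, e.g. `t.detach().cpu().numpy()`.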