My question is: What happens to the memory region of a tensor object on the CPU when the tensor (in PyTorch) is sent to a CUDA device?
In PyTorch, moving a CPU tensor to the GPU (e.g. with `.to('cuda')` or `.cuda()`) creates a copy, so the two tensors live independently afterwards.
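A minimal sketch of that independence (the `clone()` fallback is just so the snippet also runs on a machine without CUDA; on a GPU machine the `.to("cuda")` branch is taken):

```python
import torch

a = torch.ones(3)  # CPU tensor
# copy to the GPU if one is available; otherwise simulate with a CPU clone
b = a.to("cuda") if torch.cuda.is_available() else a.clone()

# the copy has its own storage: modifying one does not affect the other
b += 1
print(a)  # a is unchanged, still all ones
print(b)  # b is all twos
```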
If the CPU tensor is still referenced, e.g.
- by your program,
- by autograd (if you used it in a computation involving values that required grad, and PyTorch needs it for computing the backward pass),
- by a “view” tensor (e.g. a slice),
it will still be around.
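The view case can be seen directly: a slice shares the base tensor's storage, so dropping the base does not free the memory while the view is alive. A small sketch:

```python
import torch

base = torch.arange(6.0)  # CPU tensor owning a 6-element storage
view = base[2:4]          # a slice: shares storage with base, no copy

del base                  # the storage is kept alive by the view
print(view)               # still valid: the slice reads from the shared storage
```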
If the CPU tensor (or its storage, to be precise) is no longer referenced after the copy, it will be freed (with the usual caveats: the system allocator and libraries may cache freed memory rather than return it to the OS immediately, etc.).
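You can observe the Python-side tensor object being freed with a weak reference once the last strong reference is dropped (assuming nothing else, such as autograd or a view, holds on to it; `copy=True` forces a real copy even for a CPU-to-CPU "move"):

```python
import gc
import weakref

import torch

t = torch.zeros(1000)  # CPU tensor
dev = "cuda" if torch.cuda.is_available() else "cpu"
g = t.to(dev, copy=True)  # independent copy with its own storage

ref = weakref.ref(t)  # weak reference does not keep t alive
del t                 # drop the last strong reference to the CPU tensor
gc.collect()          # not strictly needed in CPython, but makes it explicit

print(ref() is None)  # the CPU tensor object has been deallocated
print(g.numel())      # the copy is unaffected
```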