I converted a PyTorch Tensor into a NumPy ndarray, and as shown below, `a` and `b` have different ids, so they appear to point to different memory locations. I know that in PyTorch a trailing underscore indicates an in-place operation, so `a.add_(1)` does not change the memory location of `a`. But why does it also change the contents of the array `b`, given that their ids differ?
import torch

a = torch.ones(5)
b = a.numpy()
a.add_(1)
print(id(a), id(b))
print(a)
print(b)
139851293933696 139851293925904
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
The PyTorch docs say: "Converting a torch Tensor to a NumPy array and vice versa is a breeze. The torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other."
But they have different ids, so shouldn't they be independent objects?
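One way to see what is going on (a quick check I added, not part of the original snippet): `id()` reports the identity of the Python wrapper objects, which are genuinely distinct, a `torch.Tensor` and a `numpy.ndarray`. The shared state is the underlying data buffer, whose address can be compared via `Tensor.data_ptr()` on the PyTorch side and `__array_interface__['data']` on the NumPy side:

```python
import torch

a = torch.ones(5)
b = a.numpy()

# The wrappers are different Python objects, so their ids differ.
print(id(a) == id(b))  # False

# But both wrappers point at the same underlying data buffer.
print(a.data_ptr() == b.__array_interface__['data'][0])  # True

# An in-place op mutates that shared buffer, so b sees the change too.
a.add_(1)
print(b)  # [2. 2. 2. 2. 2.]
```

So the differing ids only tell you the two wrapper objects are distinct; they say nothing about whether the data they wrap is shared.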