This is well explained in this answer.
So when you index like this ↓ you might get back the same wrapper object with only the offset updated, but that is not guaranteed: if you run the code multiple times, you will sometimes see different addresses (there is a short id() sketch at the end of this answer).
As mentioned in the previous link, what remains stable is the address of the tensor's storage, which can be accessed with a.storage().data_ptr().
import torch

# Here we are referencing a new wrapper (a view) that shares memory with 'a'
a = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(a[0].data_ptr())
print(a[1].data_ptr())
# Output (exact addresses will differ between runs/machines):
# a[0]: 114012224
# a[1]: 114012248
# Two different results, because each row's data_ptr() points at that row's first element
# Here we are referencing the underlying storage, where the data is actually stored
a = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(a[0].storage().data_ptr())
print(a[1].storage().data_ptr())
# Output:
# a[0]: 114011904
# a[1]: 114011904
But when we use storage().data_ptr(), we get the address of the first element of the underlying storage. Since both rows are views of the same storage, that address must be the same.
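To tie the two outputs together, here is a minimal sketch (the exact addresses are illustrative, and the dtype is assumed to be the default int64): the per-row data_ptr() values differ by exactly one row of elements, while the storage address is shared.

import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6]])

# One row is 3 elements; with the default int64 dtype each element is 8 bytes,
# so consecutive rows start 24 bytes apart (114012248 - 114012224 = 24 above)
row_bytes = a.shape[1] * a.element_size()
print(a[1].data_ptr() - a[0].data_ptr() == row_bytes)          # True
# Row 0 starts right at the beginning of the storage
print(a[0].data_ptr() == a.storage().data_ptr())               # True
# Both views report the same underlying storage
print(a[0].storage().data_ptr() == a[1].storage().data_ptr())  # True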
Here are some other useful answers:
https://discuss.pytorch.org/t/why-is-id-tensors-storage-different-every-time/100478
https://discuss.pytorch.org/t/any-way-to-check-if-two-tensors-have-the-same-base/44310/8
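Related to the id() discussions in the threads above: comparing id() of temporary views can be misleading, because the wrapper created by a[0] may be freed before a[1] is built, and CPython can then reuse the same object address. A small sketch of that (the id() comparison can vary between runs and Python versions):

import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Holding references shows that each indexing operation builds a distinct wrapper object
v1, v2 = a[0], a[0]
print(v1 is v2)                        # False: two different Python objects
print(v1.data_ptr() == v2.data_ptr())  # True: both point at the same element

# Without held references, the temporary wrapper can be freed and its
# object address reused, so id() may (or may not) look identical here
print(id(a[0]) == id(a[1]))            # result is not reliable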