I was wondering how PyTorch tensors are mapped in memory. For example:
>>> import torch
>>> b = torch.rand(10)
Now if I do:
>>> id(b[0])
140426187280456
>>> id(b[1])
140426187280456
Both of them are exactly the same memory address?
I was hoping that, given the values inside b are float32, the ids would be separated by 4 bytes.
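For instance, this is roughly the behaviour I was expecting to see at the level of the underlying data (a sketch; I'm assuming data_ptr() returns the memory address of a tensor's first element):

>>> b.element_size()                     # bytes per element (float32)
4
>>> b[1].data_ptr() - b[0].data_ptr()    # I expected consecutive elements to be this far apart
4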
Am I wrong somewhere?
I wanted to know: how is a torch tensor mapped in memory?
Thank you!