Why do hash values of identical tensors differ when they are assigned to different variables?

Hi all,

Here is a very simple example to illustrate my question:

import torch

a = torch.arange(5).cuda()
b = torch.arange(5).cuda()

print(a.__hash__())
print(b.__hash__())

If you run this, you will see that the hash value of a differs from that of b, which is weird to me. Is there any explanation for this? And is there a way to avoid this behavior? (I would like to use tensors as dictionary keys, and because of this behavior, the same logical key ends up appearing multiple times.)
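For example, here is a minimal sketch of the duplicate-key issue I mean (the dictionary d is just for illustration):

import torch

a = torch.arange(5).cuda()
b = torch.arange(5).cuda()

d = {}
d[a] = "first"
d[b] = "second"

# a and b hold the same values, but the dict treats them as two distinct keys
print(len(d))  # 2, not 1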

Thanks a lot in advance for your help!

Samuel

The hash of a Tensor is based on object identity, not on value. It's not reasonable to hash by value because (1) it's expensive for large tensors and (2) every hash of a CUDA tensor would force a host-device synchronization.
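A quick way to see this, assuming only standard Python hashing semantics (the hash follows the object, so two references to the same tensor hash equally, while two tensors with equal values do not):

import torch

a = torch.arange(5).cuda()
c = a  # another reference to the same tensor object

print(hash(a) == hash(c))  # True: same object, same hash

b = torch.arange(5).cuda()  # equal values, but a distinct object
print(a is b)               # False
print(hash(a) == hash(b))   # False: different objects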


Thank you for your answer, I understand. But on the other hand, it means that I need to update my dictionary on the CPU, and so make a GPU-CPU round trip with all my tensors. Anyway, thanks again.
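For reference, here is the kind of workaround I have in mind (value_key is a hypothetical helper, not a PyTorch API; it accepts the device-to-host copy in exchange for value-based lookups):

import torch

def value_key(t):
    # Build a hashable, value-based key from a tensor.
    # The .cpu() call is the GPU-CPU round trip mentioned above.
    return (t.shape, t.dtype, t.cpu().numpy().tobytes())

a = torch.arange(5).cuda()
b = torch.arange(5).cuda()

d = {}
d[value_key(a)] = "first"
d[value_key(b)] = "second"  # same values -> same key, so this overwrites

print(len(d))  # 1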
