How can I manage tensor memory in C++?

I am creating tensors on the Python side and trying to manage their memory on the C++ side.

For example, I have the following C++ code that tries to record the memory information.

void RegisterTensor(const std::uint32_t tensor_id, torch::Tensor& buffer) {
   // tensor_to_id_ is an unordered_map; I want to modify the tensor later
   // on the C++ side, so I keep this mapping.
   tensor_to_id_.insert(std::make_pair(&buffer, tensor_id));
   std::cout << "tensor id " << tensor_id << " address " << (void*)&buffer << std::endl;
}

The code is built as a torch extension that can be loaded from Python.
However, when I run the following code, something goes wrong.

tensor1 = torch.randn(10, 10).pin_memory()
tensor2 = torch.randn(10, 10).pin_memory()
print("tensor1 address:", hex(id(tensor1)))
print("tensor2 address:", hex(id(tensor2)))
lib.register(tensor1, 1)
lib.register(tensor2, 2)

I expect the two tensors to have different addresses, but the output is not what I expected.

tensor1 address: 0x7f1eedb2e930
tensor2 address: 0x7f1eedb2ea20
tensor id 1 address 0x7fff27333998
tensor id 2 address 0x7fff27333998

Could anybody explain why the memory address is always the same on the C++ side, and how I can properly manage the memory from C++? Thank you!


I'm not sure id() is what you want if your goal is to inspect memory addresses — in CPython, id() returns the address of the Python wrapper object, not of the tensor's data. Could you check whether the .data_ptr() method returns different addresses for your tensors as expected?
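As for why the C++ side prints the same address twice: a likely explanation (assuming the extension is bound with pybind11, as torch extensions usually are) is that each call materializes a temporary torch::Tensor wrapper for the argument, so &buffer is the address of that short-lived wrapper on the stack, which can be identical across calls. The wrapper address says nothing about where the data lives. A minimal Python sketch of the distinction, using data_ptr() (the address of the underlying storage):

```python
import torch

# id() gives the address of the Python wrapper object; data_ptr() gives
# the address of the tensor's underlying storage buffer.
tensor1 = torch.randn(10, 10)
tensor2 = torch.randn(10, 10)

print("tensor1 data_ptr:", hex(tensor1.data_ptr()))
print("tensor2 data_ptr:", hex(tensor2.data_ptr()))

# Distinct tensors own distinct storages...
assert tensor1.data_ptr() != tensor2.data_ptr()

# ...while a view shares its base tensor's storage, so data_ptr() is a
# stable key for identifying the same buffer across the language boundary.
assert tensor1.view(-1).data_ptr() == tensor1.data_ptr()
```

If that holds, on the C++ side you could key tensor_to_id_ by buffer.data_ptr() (a void*) rather than by &buffer, since the storage pointer identifies the buffer regardless of which temporary wrapper you received.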