How are Tensors on CPU created?


I am trying to implement basic support for using Persistent Memory (Intel Optane PMem) instead of DRAM as the underlying storage of a Tensor. PMem regions are, in the end, exposed to the application as regular pointers into a memory-mapped address range, so in principle a Tensor should be able to live there.

However, I currently can't figure out where the actual memory of a Tensor is allocated. Is there any documentation on that? I found that there are several libraries actually backing tensors (C10, ATen), but without a first introduction it's hard to dive into that level of detail. For example, if I torch.load() a tensor from disk, what does the call stack look like?
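To clarify what I'm ultimately after, here is a minimal sketch of a tensor wrapping externally allocated memory. It uses an anonymous mmap buffer as a stand-in for a real PMem mapping (which would instead come from libpmem or a DAX-mounted file), and `torch.frombuffer` to wrap that memory without copying:

```python
import mmap
import torch

# Stand-in for a PMem region: an anonymous mmap'd buffer.
# A real PMem setup would mmap a file on a DAX filesystem instead.
buf = mmap.mmap(-1, 4 * 16)  # room for 16 float32 values

# torch.frombuffer wraps the existing memory without copying,
# so the tensor's storage *is* the mmap'd region.
t = torch.frombuffer(buf, dtype=torch.float32)

# Writes through the tensor go directly to the underlying buffer.
t.fill_(1.0)
```

This works for wrapping an existing region, but what I'd really like to understand is the allocation path itself, so that e.g. torch.load() could place storage in PMem from the start.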

Thank you so much for giving me initial pointers.