Control Tensor Memory Allocation?

Is there a way to control how memory is allocated, freed, and reallocated for Tensors? Is there any doc or guide on how Torch memory allocation happens behind the scenes? I am on Linux, using LibTorch.

The CPU memory allocator is implemented here:

AFAICT you can control how memory is allocated by implementing your own at::Allocator and registering it.
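A rough, untested sketch of what that could look like (`at::Allocator` is an alias of `c10::Allocator`; exact signatures vary between LibTorch versions — for example, older releases declare `allocate` as `const`):

```cpp
#include <cstdlib>
#include <iostream>

#include <c10/core/Allocator.h>
#include <c10/core/CPUAllocator.h>

// A pass-through CPU allocator that logs every allocation and free.
struct LoggingCPUAllocator final : c10::Allocator {
  c10::DataPtr allocate(size_t nbytes) override {
    void* data = std::malloc(nbytes);
    std::cerr << "alloc " << nbytes << " bytes at " << data << "\n";
    // DataPtr fields: data pointer, context handed to the deleter,
    // deleter function, and the device the memory belongs to.
    return {data, data, &raw_delete, c10::Device(c10::DeviceType::CPU)};
  }

  static void raw_delete(void* ctx) {
    std::cerr << "free " << ctx << "\n";
    std::free(ctx);
  }

  // Lets callers free raw pointers obtained from this allocator.
  c10::DeleterFnPtr raw_deleter() const override { return &raw_delete; }
};

static LoggingCPUAllocator g_logging_allocator;

// Register above the default CPU allocator's priority so that
// subsequent CPU tensor allocations go through it.
REGISTER_ALLOCATOR(c10::DeviceType::CPU, &g_logging_allocator);
```

Alternatively, `c10::SetCPUAllocator(&g_logging_allocator)` swaps the allocator in at runtime. Note that newer LibTorch releases add further pure-virtual methods to `c10::Allocator` (e.g. `copy_data`), so check the `c10/core/Allocator.h` header shipped with your version for what you actually need to override.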