Is there a way to control how memory is allocated, deleted and reallocated for Tensors? Is there any doc or guide on how Torch memory allocation happens behind the scenes? I am on Linux, using Libtorch.
dfalbel (Daniel Falbel) — January 30, 2021, 11:16am
The CPU memory allocator is implemented here:
struct C10_API DefaultCPUAllocator final : at::Allocator {
  DefaultCPUAllocator() {}
  ~DefaultCPUAllocator() override {}
  at::DataPtr allocate(size_t nbytes) const override {
    void* data = alloc_cpu(nbytes);
    profiledCPUMemoryReporter().New(data, nbytes);
    return {data, data, &ReportAndDelete, at::Device(at::DeviceType::CPU)};
  }
  static void ReportAndDelete(void* ptr) {
    if (!ptr) {
      return;
    }
    profiledCPUMemoryReporter().Delete(ptr);
    free_cpu(ptr);
  }
  at::DeleterFnPtr raw_deleter() const override {
    return &ReportAndDelete;
  }
};
AFAICT you can control how memory is allocated by implementing another at::Allocator and registering it.