Using PyTorch allocators for custom extensions

Hello everyone. What is the current/recommended API for custom extensions that need to perform allocations and want to use/share the memory pool created by the PyTorch library? For instance, functions/operators that use the Thrust library go through a series of wrappers that ultimately call c10::cuda::CUDACachingAllocator::raw_alloc and c10::cuda::CUDACachingAllocator::raw_delete. Is it OK to call these two functions from external code/libraries?
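For concreteness, here is a minimal sketch of the kind of wrapper I mean (modeled on the Thrust-allocator pattern; the struct name is illustrative):

```cpp
#include <cstddef>

#include <c10/cuda/CUDACachingAllocator.h>
#include <thrust/execution_policy.h>

// Thrust-compatible allocator that routes temporary allocations
// through PyTorch's CUDA caching allocator.
struct ThrustCachingAllocator {
  using value_type = char;

  char* allocate(std::ptrdiff_t size) {
    return static_cast<char*>(
        c10::cuda::CUDACachingAllocator::raw_alloc(size));
  }

  void deallocate(char* ptr, size_t /*size*/) {
    c10::cuda::CUDACachingAllocator::raw_delete(ptr);
  }
};

// Usage: pass it to a Thrust execution policy, e.g.
//   ThrustCachingAllocator alloc;
//   thrust::sort(thrust::cuda::par(alloc).on(stream), first, last);
```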

thanks

Personally, I’d just allocate a Tensor and keep that alive.
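A minimal sketch of that approach, assuming a CUDA build of LibTorch (the helper name make_scratch is mine): the byte tensor draws from the same caching allocator, and the memory stays valid only while the tensor is alive.

```cpp
#include <utility>

#include <ATen/ATen.h>

// Allocate nbytes of CUDA scratch memory by creating an untyped byte
// tensor; the caller must keep the tensor alive while using the pointer.
std::pair<at::Tensor, void*> make_scratch(int64_t nbytes) {
  at::Tensor buf =
      at::empty({nbytes}, at::device(at::kCUDA).dtype(at::kByte));
  return {buf, buf.data_ptr()};
}
```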


Hello @tom, thanks for your reply. The reason I was going to use this kind of allocation API is that the allocations are controlled by a library and, similarly to the Thrust case, the library exposes function pointers for the allocation and deallocation functions. What would be the main downsides of using this API? I was not sure whether these functions are actually public.
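Concretely, the pattern I mean looks like this; ext_set_allocators is a hypothetical registration hook standing in for whatever the actual library exposes, and this is only a sketch:

```cpp
#include <cstddef>

#include <c10/cuda/CUDACachingAllocator.h>

// Hypothetical hook exposed by the external library; its signatures
// happen to match raw_alloc/raw_delete exactly.
void ext_set_allocators(void* (*alloc_fn)(size_t), void (*free_fn)(void*));

void register_pytorch_allocators() {
  // Hand PyTorch's caching-allocator entry points to the library so
  // its allocations come out of (and return to) the same pool.
  ext_set_allocators(&c10::cuda::CUDACachingAllocator::raw_alloc,
                     &c10::cuda::CUDACachingAllocator::raw_delete);
}
```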

thanks

Apparently they are used by other libraries, too; see the comments here (and also more use cases towards the bottom): Expose `CUDACachingAllocator` `raw_alloc` and `raw_delete` to python by emcastillo · Pull Request #33860 · pytorch/pytorch · GitHub

Personally, I would not do it because it’s not the level of API abstraction I want to work with. For most of my interop needs, I found ways to make DLManagedTensors (DLPack) do what I need, but hey, it’s your code and it looks as safe as any other low-level function. 🙂
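For reference, the DLPack route looks roughly like this; a minimal sketch using ATen's at::toDLPack, with error handling omitted:

```cpp
#include <ATen/ATen.h>
#include <ATen/DLConvertor.h>

void share_with_external_lib(const at::Tensor& t) {
  // The DLManagedTensor keeps the tensor's storage alive until its
  // deleter runs; the consumer must call mt->deleter(mt) exactly once.
  DLManagedTensor* mt = at::toDLPack(t);
  // ... pass `mt` to the external library ...
  mt->deleter(mt);  // released here only for illustration
}
```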
