Read CudaArray for prediction/inference/image processing

Is it possible to read directly from a CudaArray when using the C++ API?

I would like to build a plugin system for an existing application where I can access data that already exists on the GPU. I'm trying to devise a system where I don't have to copy to and from the CPU when processing or predicting on texture data.

Is this possible with PyTorch?


You could use DLManagedTensor and fromDLPack as an API, or dive into the implementation and use from_blob. Note that either way you need to ensure the GPU memory blob stays valid for as long as the tensor lives, i.e. until the deleter is called.

Best regards


Great, thanks! I'll look into those methods.