PyTorch - CUDA Interoperability

Does anyone know the recommended way to go from PyTorch to raw CUDA, on either the Python or the C++ side?

I want to use my PyTorch tensors in custom CUDA code, and I'm trying to figure out whether to go through CuPy or the C++ API, and how to handle the interop correctly.
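For concreteness, on the CuPy route this is the kind of zero-copy sharing I'm hoping is sanctioned (a sketch, assuming `cupy.asarray` wraps the tensor's memory via `__cuda_array_interface__` rather than copying it):

```python
import torch


def double_via_cupy(t):
    """Wrap a CUDA tensor with CuPy (zero-copy) and double it in place."""
    import cupy as cp
    c = cp.asarray(t)  # shares memory through __cuda_array_interface__
    c *= 2             # mutates the same buffer PyTorch owns
    return t


if torch.cuda.is_available():
    t = torch.arange(3, dtype=torch.float32, device="cuda")
    double_via_cupy(t)
    # If the wrap really is zero-copy, PyTorch sees the change:
    print(t.cpu().tolist())  # [0.0, 2.0, 4.0]
```

Is this safe, or do I need to worry about lifetime and synchronization between the two libraries?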

I'd also appreciate any insight into how PyTorch manages CUDA context creation and streams, and the correct way to handle both when calling low-level CUDA functions. Any examples or pointers would be really helpful!
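On the stream side, here is my current (possibly wrong) understanding as a sketch: PyTorch uses the device's primary context via the runtime API, so other runtime-API users share it automatically, and external work should be enqueued on PyTorch's current stream so it serializes with PyTorch's own kernels. Is something like this the intended pattern?

```python
import torch

if torch.cuda.is_available():
    # PyTorch's current stream for the active device.
    s = torch.cuda.current_stream()

    # The raw cudaStream_t handle as an integer, which I assume is what
    # I'd pass to a low-level kernel launch or a CUDA library call.
    raw = s.cuda_stream
    assert isinstance(raw, int)

    # Conversely, ExternalStream wraps a stream created outside PyTorch,
    # so PyTorch ops can be made to run on it.
    ext = torch.cuda.ExternalStream(raw)
    with torch.cuda.stream(ext):
        y = torch.ones(4, device="cuda") * 3
    torch.cuda.synchronize()
```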