Creating Variable from existing numba CUDA array

Hi, I’m doing some calculations with numba CUDA and was wondering whether I could wrap the CUDA array in a FloatTensor and a Variable directly, instead of copying it to the host with numba and back to the device with PyTorch.

Let’s say I do:

from numba import cuda
import numpy as np
x = cuda.device_array((3, 9, 11), dtype=np.float32)

How can I wrap this array with pytorch?
I am able to get the ctypes pointer of the array with x.device_ctypes_pointer, but I don’t know how to get from there to a torch.cuda.FloatTensor.

As far as I can tell, this code here does the opposite:


I am not sure that is possible.
In the code you linked, they create a torch tensor, convert it to a numba array, and then fill the numba array in their function (which also fills the torch tensor, since both share the same memory).

You could do a similar thing by having a torch buffer that you convert to numba and then copy your existing numba array into it. This will cost you at most one extra GPU-to-GPU copy of your tensor. If you can make your function write the result directly into the buffer, then you get this for free.


Thanks! That’s a good idea :slight_smile:
In the end I realized that since I am manually initializing the numba CUDA array, I can just use the code from the link above: first create the PyTorch tensor and then convert it to a numba CUDA array. It works great!

Hi! I’m also in a situation where I need to convert a numba CUDA array to a PyTorch tensor, but I can’t get it to work. Could you share the code or the link?