Convert torch tensors directly to cupy tensors?

I know jumping through the conversion hoops with cupy.array(torch_tensor.cpu().numpy()) is one option, but since the tensor is already in GPU memory, is there any equivalent of a .cupy() to get it directly into CuPy?

Thanks

Yes, there’s an easy way. See this example:

If you need to use CuPy in order to run a kernel, as in szagoruyko’s gist, what Soumith posted is what you want. But that doesn’t create a full-fledged CuPy ndarray object; to do that you’d need to replicate the functionality of torch.Tensor.numpy(). In particular, you need to account for the fact that NumPy/CuPy strides are measured in bytes while torch strides are measured in element counts; other than that you can use the same memory pointer, shape, and dtype.
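
For illustration, here is a hypothetical sketch of that wrapping (the helper name and the use of cupy.cuda.UnownedMemory are assumptions for the example, not code from this thread):

import numpy as np
import cupy
import torch

def wrap_torch_as_cupy(t):
    # Hypothetical helper: view a CUDA torch tensor as a CuPy ndarray
    # without copying. A sketch only, not production code.
    assert t.is_cuda
    itemsize = t.element_size()
    # torch strides count elements; CuPy (like NumPy) strides count bytes.
    strides = tuple(s * itemsize for s in t.stride())
    # Map the dtype, e.g. torch.float32 -> numpy dtype('float32').
    dtype = np.dtype(str(t.dtype).replace('torch.', ''))
    # Wrap the existing allocation; keeping `t` as the owner prevents its
    # memory from being freed while the CuPy view is alive.
    mem = cupy.cuda.UnownedMemory(t.data_ptr(), t.numel() * itemsize, owner=t)
    memptr = cupy.cuda.MemoryPointer(mem, 0)
    return cupy.ndarray(tuple(t.shape), dtype=dtype, memptr=memptr,
                        strides=strides)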

Great! Thank you both.

But that doesn’t create a full-fledged CuPy ndarray object

For now, I do just need to run a kernel, so the gist should work, but thanks for pointing this out.

Can we now directly convert between CuPy arrays and PyTorch tensors?

For example, this seems to work in PyTorch 1.4:

>>> import torch
>>> import cupy
>>>
>>> t = torch.cuda.ByteTensor([2, 22, 222])
>>> c = cupy.asarray(t)
>>> c_bits = cupy.unpackbits(c)
>>> t_bits = torch.as_tensor(c_bits, device="cuda")
>>>
>>> t_bits.view(-1, 8)
tensor([[0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 1, 0, 1, 1, 0],
        [1, 1, 0, 1, 1, 1, 1, 0]], device='cuda:0', dtype=torch.uint8)

Are there any issues with this code? Do the arrays remain on the GPU during this conversion? Is tensor t copied during the conversion to CuPy? I assume so, because I doubt PyTorch and CuPy are able to share the same CUDA memory.
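
One way to check whether the memory is actually shared (assuming a PyTorch/CuPy pairing where CUDA tensors expose __cuda_array_interface__, which cupy.asarray can consume without copying):

>>> t = torch.cuda.ByteTensor([2, 22, 222])
>>> c = cupy.asarray(t)
>>> t.data_ptr() == c.data.ptr  # True if both view the same buffer
>>> c[0] = 7                    # mutate through the CuPy view
>>> t                           # if shared, this now shows [7, 22, 222]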

I think DLPack is the solution. According to the CuPy documentation, you can do:

import cupy
import torch

from torch.utils.dlpack import to_dlpack
from torch.utils.dlpack import from_dlpack

# Create a PyTorch tensor.
tx1 = torch.randn(1, 2, 3, 4).cuda()

# Convert it into a DLPack tensor.
dx = to_dlpack(tx1)

# Convert it into a CuPy array.
cx = cupy.fromDlpack(dx)

# Convert it back to a PyTorch tensor.
tx2 = from_dlpack(cx.toDlpack())
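
On newer releases this can be shortened: PyTorch 1.10+ provides torch.from_dlpack, and recent CuPy provides cupy.from_dlpack, both of which accept the other library’s object directly (a sketch, assuming such versions):

import cupy
import torch

tx1 = torch.randn(1, 2, 3, 4).cuda()

# Both libraries speak the DLPack protocol directly, so no explicit
# capsule round-trip is needed; the data is still shared, not copied.
cx = cupy.from_dlpack(tx1)
tx2 = torch.from_dlpack(cx)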

You don’t need DLPack. Direct conversion, as in my example, works fine.

I tried @michaelklachko’s method and it works. However, cupy.unpackbits unpacks all the bits, which might not be what you want. @senpai-a’s suggestion of using DLPack works well if you want to preserve the values with the right dtype.

You misunderstood my example. You don’t need unpackbits or anything like that to pass data from CuPy to PyTorch. That was just something I wanted to do.

>>> t1 = torch.cuda.ByteTensor([2, 22, 222])
>>> c1 = cupy.asarray(t1)
>>> t2 = torch.as_tensor(c1, device="cuda")
>>> c2 = cupy.asarray(t2)

>>> t1
tensor([  2,  22, 222], device='cuda:0', dtype=torch.uint8)
>>> t2
tensor([  2,  22, 222], device='cuda:0', dtype=torch.uint8)
>>> c1
array([  2,  22, 222], dtype=uint8)
>>> c2
array([  2,  22, 222], dtype=uint8)

Note the dtype is preserved.

Sorry @michaelklachko, you are right. I misunderstood your approach. Thank you for the clarification.