Construct new tensor on correct device based on input

I know `input.new()` will create a new tensor of the same dtype as `input`.

But what I want is to create a new tensor on the same device as `input`, with a different data type.
For example, `input` is a `torch.cuda.FloatTensor`,
but I want something like a `torch.cuda.LongTensor`.

Is there any way to do it without an if/else on `use_cuda`?

`input.new().long()` should do it. Type casts retain the device, and if you give no arguments to `new` the tensor isn’t going to have any memory allocated, so the cast is nearly free.
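A minimal sketch of the cast trick on CPU (the same lines work unchanged on a CUDA tensor; `torch.empty` with an explicit `device=` is the more modern spelling):

```python
import torch

x = torch.rand(10)       # stands in for `input`; could equally be a cuda tensor

# new() with no arguments allocates no storage, so the chained cast is nearly free
idx = x.new().long()     # empty LongTensor on the same device as x
print(idx.dtype)         # torch.int64

# modern equivalent: state dtype and device explicitly
idx2 = torch.empty(0, dtype=torch.long, device=x.device)
```

Either way, the result lives on whatever device `x` is on, with no `is_cuda` branching.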


Hi! I’m sorry to reply to an old post but I thought replying here would be better than starting a new topic.

What if you do give arguments to new, for instance to create 0’s or from a numpy list of data? I found some ways but I’m not sure what’s the best way to do it.

So let’s say we are given X:

device = 1
X = torch.rand(10).cuda(device)  # or X = torch.rand(10)

How to make code below agnostic to whether X is on CPU/GPU and which device?

# Fill with numpy data
data = np.array([1, 2, 3])  # For example indices calculated by custom Python procedure
X.new(data)  # Works on CPU, error on GPU?
X.new(*data.shape).copy_(torch.from_numpy(data))  # Works but verbose
torch.from_numpy(data).type_as(X)  # Inefficient, lossy?
# X.new(torch.from_numpy(data))  # Error

# Or fill with zeros
X.new(3).zero_()  # Works, but inefficient?
torch.zeros(3).type_as(X)  # Works, but verbose?
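In current PyTorch the same thing can be written device-agnostically with the `device=` keyword on factory functions; a minimal CPU sketch, where `X` stands in for a tensor that may live on either device:

```python
import torch
import numpy as np

X = torch.rand(10)  # may be a CPU or CUDA tensor; the lines below don't care

# Fill with numpy data, landing on X's device
data = np.array([1, 2, 3])
t = torch.as_tensor(data, device=X.device)

# Or fill with zeros, matching X's dtype and device
z = torch.zeros(3, dtype=X.dtype, device=X.device)
```

No `is_cuda` branch and no lossy round-trip through `type_as` is needed.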

@wouter your post is unrelated to the parent topic (parent topic is about GPU tensors)

@smth I’m sorry if it was not clear. Actually I meant that X can be on any device, then how to construct a Tensor with numpy data or 0’s on that same device. I edited my post, please let me know if I should open a new topic for this.

The new() method is deprecated.

What is the new alternative for this use case?

new_tensor = torch.zeros_like(old_tensor)
will create the new tensor on CUDA if needed.

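A short sketch of the `*_like` / `new_*` replacements for `new()`; `old_tensor` here is just a stand-in for any existing tensor whose device and dtype you want to inherit:

```python
import torch

old_tensor = torch.rand(4)            # CPU here; could equally be a CUDA tensor

# All of these follow old_tensor's device (and dtype) automatically
a = torch.zeros_like(old_tensor)      # same shape, dtype, and device
b = old_tensor.new_zeros(4)           # same dtype/device, explicit shape
c = old_tensor.new_tensor([1, 2, 3])  # copies data onto old_tensor's device
```

`zeros_like` copies the shape as well; the `new_*` methods let you choose a different shape or data while keeping the device and dtype.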

This creates very ugly (and slow) code such as

    if std.is_cuda:
        eps = torch.FloatTensor(std.size()).cuda().normal_()
    else:
        eps = torch.FloatTensor(std.size()).normal_()

instead of the much better

    eps = std.new(std.size()).normal_()

Isn’t there a better way?

Found the answer to my question: `torch.randn_like(std)`.
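A minimal sketch of that approach; `torch.randn_like` draws standard-normal samples into a tensor with the same shape, dtype, and device as its argument, so the `is_cuda` branch disappears:

```python
import torch

std = torch.rand(5)            # works the same if std lives on a GPU

# one line, device- and dtype-agnostic
eps = torch.randn_like(std)
sample = eps * std             # e.g. a reparameterization-style step
```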