I know input.new() will create a new tensor with the same dtype as input.
But what I want is to create a new tensor on the same device as input, with a different data type.
For example, input is a torch.cuda.FloatTensor,
but I want something like a torch.cuda.LongTensor.
Is there any way to do it without an if/else on use_cuda?
input.new().long() should do it. Type casts retain the device, and if you give no arguments to new the tensor isn’t going to have any memory allocated, so the cast is nearly free.
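A minimal sketch of that pattern (CPU here for illustration; on a torch.cuda.FloatTensor the result would land on the same GPU):

```python
import torch

x = torch.rand(10)     # stands in for `input`; could equally be a CUDA tensor
y = x.new().long()     # empty tensor on x's device, cast to long

assert y.dtype == torch.int64    # i.e. a LongTensor
assert y.numel() == 0            # no memory allocated yet, so the cast is cheap
assert y.device == x.device      # the device is retained through the cast
```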
Hi! I’m sorry to reply to an old post, but I thought replying here would be better than starting a new topic.
What if you do give arguments to new, for instance to create zeros or to build a tensor from numpy data? I found some ways, but I’m not sure which is best.
So let’s say we are given X:
device = 1
X = torch.rand(10).cuda(device) # or X = torch.rand(10)
How can the code below be made agnostic to whether X is on CPU or GPU, and to which device?
# Fill with numpy data
data = np.array([1, 2, 3]) # For example indices calculated by custom Python procedure
X.new().long().new(data) # Works on CPU, error on GPU?
X.new().long().new(*data.shape).copy_(torch.from_numpy(data)) # Works but verbose
X.new(data.astype(float)).long() # Inefficient, lossy?
# X.new(data) # Error
# Or fill with zeros
X.new(10).long().zero_() # Works, but inefficient?
X.new().long().new(10).zero_() # Works, but verbose?
@smth I’m sorry if that was not clear. What I meant is that X can be on any device, and I’m asking how to construct a tensor with numpy data or zeros on that same device. I edited my post; please let me know if I should open a new topic for this.
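For anyone landing here later: newer PyTorch releases (0.4 and up) added Tensor.new_tensor and Tensor.new_zeros, which accept a dtype argument and inherit X’s device, so no use_cuda branching is needed. A sketch using the names from the post above:

```python
import torch
import numpy as np

X = torch.rand(10)          # may just as well live on any CUDA device
data = np.array([1, 2, 3])  # e.g. indices computed by a custom Python procedure

# Fill from numpy data: same device as X, long dtype, in one call
idx = X.new_tensor(data, dtype=torch.long)

# Fill with zeros: same device as X, long dtype
z = X.new_zeros(10, dtype=torch.long)

assert idx.device == X.device and z.device == X.device
```

torch.tensor(data, dtype=torch.long, device=X.device) is equivalent for the first case; the new_* methods just save you from spelling out the device.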