How to cast a tensor to another type?

If I have a float tensor but the model needs a double tensor, what should I do to cast the float tensor to a double tensor?




You can also do tensor.type(torch.DoubleTensor), or tensor.type('torch.DoubleTensor') if you want to use a string.
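A minimal sketch of both call styles (tensor name and values are just for illustration):

```python
import torch

x = torch.randn(3)                    # float32 by default
d1 = x.type(torch.DoubleTensor)       # cast via the tensor-type class
d2 = x.type('torch.DoubleTensor')     # the same cast via a string
print(d1.dtype, d2.dtype)             # torch.float64 torch.float64
```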


Thanks for your reply, but even after I followed your suggestion and cast the type, the error is still there.
My usage is:

x, y = Variable(x.cuda()), Variable(y.cuda())

Although I call print(x.type()) before preds = model(x) and it shows that the type of x is torch.cuda.DoubleTensor, the error RuntimeError: expected Double tensor (got Float tensor) appears every time.
Can you give me some suggestions? Thank you!
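When debugging mismatches like this, it helps to double-check what is actually reaching the model. A quick sketch of the ways to inspect a tensor's type (CPU tensor here for illustration):

```python
import torch

x = torch.randn(2, 3)
print(x.type())   # torch.FloatTensor -- the legacy type string
print(x.dtype)    # torch.float32    -- the modern dtype attribute
print(x.is_cuda)  # False here; True for a tensor living on the GPU
```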


@alan_ayu: Is the error really on those lines?

Can you paste a small test case + the actual error?

My experience is that expected Double tensor (got Float tensor) appears when you try to use NLLLoss or CrossEntropyLoss, since representing all int32 values exactly requires float64.

Furthermore, unless you have a Tesla card, you shouldn't use DoubleTensor for your X: it's 32 times slower than float32 on GeForce, Quadro, and Titan cards of any recent generation (Maxwell and Pascal, so since 2014).


Yes, you are right, there was something wrong in my dataset.
Thank you for reminding me.

But if your tensor is a torch.cuda.FloatTensor, casting with tensor.type(torch.DoubleTensor) moves it off the GPU. Better to use tensor.double(), because it works for both CPU and GPU tensors.
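A small sketch of the device-preserving cast (shown on a CPU tensor; the same call works unchanged on a CUDA tensor):

```python
import torch

x = torch.randn(4)           # CPU here; could equally be a CUDA tensor
y = x.double()               # dtype becomes float64, device is untouched
print(y.dtype, y.device)     # torch.float64 cpu
```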


Could you explain how to convert from LongTensor to FloatTensor while keeping the cuda() property intact?
What if we’re talking about the Variable type?


Hi Royi,
Here's a snippet of my code which does what I believe to be what you want:

x = x.type(torch.cuda.FloatTensor)
x_cuda = Variable(x, requires_grad=True).cuda()
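In current PyTorch the same LongTensor-to-FloatTensor conversion can be done device-agnostically; a minimal sketch (tensor values are just for illustration):

```python
import torch

idx = torch.arange(5)         # int64 (a LongTensor); could equally live on the GPU
f = idx.float()               # float32, still on idx's original device
g = idx.to(torch.float32)     # equivalent dtype-only cast
print(f.dtype, g.dtype)       # torch.float32 torch.float32
```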



Casting seems not to work in PyTorch 1.0. Please see the output below. How can I fix that, please?

tensor([[[ 1.5446,  0.3419,  0.1070, -0.6632,  0.5054,  0.7074],
         [-0.5460, -0.0041, -0.6613, -1.5072,  0.4836,  3.1626],
         [-0.9564,  1.8512, -0.6912, -1.0977,  0.4808, -0.5918],
         [-1.3628,  2.2673, -0.9875,  1.0004,  0.1614, -0.4596],
         [-2.0670,  1.4336, -1.1763,  0.1440, -0.5740,  0.2190]],

        [[ 1.5446,  0.3419,  0.1070, -0.6632,  0.5054,  0.7074],
         [-0.5460, -0.0041, -0.6613, -1.5072,  0.4836,  3.1626],
         [-0.9564,  1.8512, -0.6912, -1.0977,  0.4808, -0.5918],
         [-1.3628,  2.2673, -0.9875,  1.0004,  0.1614, -0.4596],
         [-2.0670,  1.4336, -1.1763,  0.1440, -0.5740,  0.2190]]],

In modern PyTorch, you just call float_tensor.double() to cast a float tensor to a double tensor. There is a method for each type you want to cast to. If, instead, you have a dtype and want to cast to that, call float_tensor.to(your_dtype) (e.g., your_dtype = torch.float64).
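A minimal sketch of the dtype-based cast (variable names are just for illustration):

```python
import torch

float_tensor = torch.randn(3)                 # float32
your_dtype = torch.float64
double_tensor = float_tensor.to(your_dtype)   # cast to a dtype stored in a variable
print(double_tensor.dtype)                    # torch.float64
```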


@alan_ayu @ezyang
Isn’t there a method to change the dtype of a model?

The .to() method also works on models and dtypes; e.g. model.to(torch.float64) will convert all parameters to float64.
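A quick sketch on a toy module (the layer sizes are arbitrary; note that for modules, .to() converts in place):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)        # parameters start out as float32
model.to(torch.float64)        # converts all parameters and buffers in place
for p in model.parameters():
    print(p.dtype)             # torch.float64
```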


Just an on-the-go solution:

tensor_one.float() : converts the tensor_one type to torch.float32
tensor_one.double() : converts the tensor_one type to torch.float64
tensor_one.int() : converts the tensor_one type to torch.int32
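A short sketch of these convenience casts side by side (values chosen only to show the truncation that integer casts perform):

```python
import torch

t = torch.tensor([1.5, -2.5, 3.0], dtype=torch.float64)
print(t.float().dtype)    # torch.float32
print(t.double().dtype)   # torch.float64
print(t.int().dtype)      # torch.int32 (fractional part is truncated)
print(t.long().dtype)     # torch.int64
```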


Cast your tensors using .long().

This worked for me.
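For context, a common place where .long() is needed is the target of nn.CrossEntropyLoss, whose class-index targets must be int64. A sketch with made-up shapes and labels:

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 5)                                    # batch of 8, 5 classes
raw_targets = torch.tensor([0., 1., 2., 3., 4., 0., 1., 2.])  # e.g. labels loaded as float
targets = raw_targets.long()                                  # class indices must be int64 (Long)
loss = nn.CrossEntropyLoss()(logits, targets)
print(loss.dtype)                                             # torch.float32
```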


How to do the above conversion in libtorch?

Hi ptrblck,

I am computing this command


The error is (I run this code on the CPU and it does not work; I changed it to GPU and )

It gives me this error. I tried every option to cast (P1*P2) to double, but it gave me float again. Would you please help me with that?

Could you post the error message you are seeing as well as the workaround you are trying to use, please?

@ptrblck thanks for pointing to the dtype conversion for the whole model.
After applying it to my model, I first received an error that was due to the fact that I had not changed the dtype of the input to what the model now expects:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same

This made sense to me, and I then switched the dtype of the input accordingly, but then I receive the following error, which causes me trouble:

RuntimeError: expected scalar type Float but found Half

Any help would be much appreciated. P.S. I searched similar issues but they did not help in my case.

Also: iterating over the model's parameters and printing their dtypes confirms that the conversion from float32 to float16 was successful.
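For reference, a dtype check like the one described can be sketched as follows (toy module; one caveat is that integer buffers such as BatchNorm's num_batches_tracked keep their integer dtype after .half()):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4)).half()
for name, p in model.named_parameters():
    print(name, p.dtype)      # all parameters report torch.float16 after .half()
for name, b in model.named_buffers():
    # floating-point buffers become float16; integer buffers
    # (e.g. num_batches_tracked) stay int64
    print(name, b.dtype)
```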