FloatTensor and DoubleTensor

I have encountered the error about the incompatibility of DoubleTensor and FloatTensor many times. I am wondering why PyTorch distinguishes between these two types of tensors?


DoubleTensor is a 64-bit floating point tensor and FloatTensor is a 32-bit floating point tensor, so a FloatTensor uses half the memory of a same-sized DoubleTensor. GPUs and CPUs can also perform more operations per second when the numbers have less precision. However, DoubleTensor has higher precision, if that is what you need. So PyTorch leaves it to the user to choose which one to use.
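For illustration, a minimal sketch of the size and precision difference (assuming a recent PyTorch version; the exact printed digits may vary):

```python
import torch

# FloatTensor stores 4 bytes per element, DoubleTensor stores 8 bytes per element
a = torch.ones(1000, dtype=torch.float32)  # FloatTensor
b = torch.ones(1000, dtype=torch.float64)  # DoubleTensor
print(a.element_size(), b.element_size())  # 4 8

# float32 keeps roughly 7 decimal digits, float64 roughly 15-16
x = torch.tensor(1 / 3, dtype=torch.float32)
y = torch.tensor(1 / 3, dtype=torch.float64)
print(x.item())  # 0.3333333432674408  <- float32 rounding is visible
print(y.item())  # 0.3333333333333333
```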
If you are asking why we cannot mix them freely, like lists in Python, again it is because of performance; it would also be much harder to support the GPU backend.
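For example (a sketch; the exact error text depends on the PyTorch version):

```python
import torch

x = torch.randn(3, dtype=torch.float32)  # FloatTensor
y = torch.randn(3, dtype=torch.float64)  # DoubleTensor

# x + y  # on the PyTorch versions discussed here this raises a
#        # FloatTensor/DoubleTensor mismatch error

# Fix it by casting explicitly to a common dtype:
z = x + y.float()    # compute in 32-bit
z = x.double() + y   # or compute in 64-bit
```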
You can also read more about double vs float precision in general.

@enisberk clearly explains the distinction between the two. The question, however, is why PyTorch does not perform automatic typecasting. Presumably to give more control. I suppose you know how to solve these errors; otherwise you may check this page.
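A common trigger for these errors is float64 data (e.g. coming from NumPy) fed into a float32 model; a minimal sketch of the two usual fixes, assuming a simple nn.Linear model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                        # parameters are float32 by default
data = torch.randn(8, 4, dtype=torch.float64)  # e.g. arrays from NumPy are often float64

# Either cast the input to match the model ...
out = model(data.float())

# ... or cast the whole model to double precision:
model = model.double()
out = model(data)
```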


While automatic typecasting might be convenient in some situations, it could introduce silent errors, e.g. in setups working with mixed precision.
Personally, I prefer to get an error and fix the type cast manually, but maybe there are some use cases where casting automatically would make sense.
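As an illustration of the mixed-precision point: even where reduced-precision casts are wanted, later PyTorch versions keep them opt-in inside an explicit autocast region rather than casting silently everywhere (a sketch, assuming a CUDA device and a PyTorch version that provides torch.autocast):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2).cuda()
data = torch.randn(8, 4, device="cuda")  # float32 input

# Casts to float16 happen only inside this explicitly requested region,
# so there is no global, silent typecasting that could hide precision bugs.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    out = model(data)

print(out.dtype)  # torch.float16 for autocast-eligible ops such as nn.Linear
```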

@ptrblck it should work just like a C++ compiler, which implicitly typecasts between ints, floats, and doubles.