# Define double tensor, but elements are all float

Hi all,

I want to define a double tensor and use its elements in later calculations, and I need them as double / float64.
But it turns out all the elements are just `float`.
Did I do anything wrong?

```python
import torch

dtype = torch.DoubleTensor
A = torch.randn((2, 2)).type(dtype)
type(A[0, 0])
```

When you do `A[0,0]`, you get a Python number.
Python numbers can only be `int` or `float`. These names have nothing to do with precision: they correspond to the `LongTensor` and `DoubleTensor` types respectively.
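A small sketch of the point above. Note that in recent PyTorch versions, indexing a 2-D tensor returns a 0-dim `Tensor` rather than a number directly, and `.item()` converts it to a plain Python number; the original thread predates this change:

```python
import torch

A = torch.randn(2, 2).double()  # double-precision tensor
B = torch.zeros(2, 2).long()    # integer tensor

# Extracting an element as a Python number: doubles become `float`,
# integer types become `int` -- those are the only two numeric names.
print(type(A[0, 0].item()))  # <class 'float'>
print(type(B[0, 0].item()))  # <class 'int'>
```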

So it’s just a name, and the value is in fact still double precision? How can I check the real precision of a Python `float`?

If you work with a `Tensor` of type `DoubleTensor`, all your operations will be done in double precision.
If you perform operations that return a Python number, you get the precision of a Python number: https://docs.python.org/2/library/stdtypes.html#typesnumeric (which is in general equivalent to a C double).
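To answer the question about checking the real precision of a Python `float`, the standard library exposes the underlying C double's parameters via `sys.float_info`:

```python
import sys

# On virtually all platforms, a Python float is an IEEE 754 double:
# 53 bits of mantissa, ~15 significant decimal digits.
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.dig)       # 15
print(sys.float_info.epsilon)   # 2.220446049250313e-16
```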


Thanks a lot, that really helps.
So I'll just keep working with `Tensor`s to maintain double precision.

Given that a Python number's precision matches the most precise `Tensor` type, you can use Python numbers as well.
But if you do so, be careful when using autograd: it will return a `Variable` containing a one-element `Tensor` (so it can keep track of the history) instead of a Python number. So always working with `Tensor`s is the best approach!
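A sketch of this behaviour using the modern autograd API (`requires_grad=True` on a tensor, rather than the older `Variable` wrapper the thread refers to):

```python
import torch

x = torch.randn(2, 2, dtype=torch.float64, requires_grad=True)
loss = (x * x).sum()

# The result of an autograd-tracked reduction is a 0-dim Tensor,
# not a Python float, so the computation history is preserved.
print(type(loss))         # <class 'torch.Tensor'>
print(loss.dtype)         # torch.float64
print(type(loss.item()))  # <class 'float'> -- but history is dropped
```

Calling `.item()` (or, in old versions, extracting the number) detaches the value from the graph, which is why staying in `Tensor` land is recommended when gradients are needed.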
