Precision problem while converting to FloatTensor

I am facing a precision problem while converting from a list to a FloatTensor. I looked through the documentation but could not find any parameter or variable that lets me set/fix the precision.

Example given below:

print(weight)
> [ 0.02551583  0.01355686 -0.04717987 ...  0.01422889 -0.01558862
   0.01234896]

print(torch.FloatTensor(weight))
> tensor([ 0.0255,  0.0136, -0.0472,  ...,  0.0142, -0.0156,  0.0123])

Hoping for a quick fix.


I don’t think a float tensor can capture all of that precision. You can use torch.set_printoptions to see where the FloatTensor stops being accurate. If you want more precision, use a DoubleTensor:

import torch

torch.set_printoptions(precision=10)

weight = [1.0000600001, 4.0000900001]
print(torch.FloatTensor(weight))
print(torch.DoubleTensor(weight))
> tensor([1.0000599623, 4.0000901222])
> tensor([1.0000600001, 4.0000900001], dtype=torch.float64)
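For what it’s worth, here is the same comparison written with the torch.tensor factory, which takes an explicit dtype argument (just a minimal sketch, reusing the weight values from above):

import torch

weight = [1.0000600001, 4.0000900001]
w32 = torch.tensor(weight, dtype=torch.float32)  # rounds each value to the nearest float32
w64 = torch.tensor(weight, dtype=torch.float64)  # keeps (near) full double precision

# the difference below is real rounding error from float32 storage,
# not a printing artifact
print(w64 - w32.double())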

DoubleTensor raised a cuDNN error. However, FloatTensor with torch.set_printoptions(precision=10) worked. But will that hold for all upcoming tensors? A lot of precision will be lost if I want all the other tensors to stay at the default dtype yet keep exactly the precision shown here.

set_printoptions only changes how tensors are printed; it has nothing to do with the stored precision. If you use the float data type, you get roughly 7 significant decimal digits of accuracy. As far as I know, for parameter estimation the float data type is more than enough. That’s probably why GeForce GPUs are preferred over cards like Quadro that offer stronger double-precision support.
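A quick sketch to make the distinction concrete (plain PyTorch, nothing project-specific assumed): the print settings change only the display, while the dtype determines what is actually stored. And if you really do need float64 everywhere, torch.set_default_dtype switches the default for newly created tensors.

import torch

torch.set_printoptions(precision=10)

x32 = torch.tensor([0.1], dtype=torch.float32)
x64 = torch.tensor([0.1], dtype=torch.float64)

print(x32)  # more digits are displayed, but the stored value is still float32
print(x64)

# the gap is genuine float32 rounding error (~1.5e-09 here), not a display issue
print(x64 - x32.double())

# if your whole pipeline should run in double precision (and your ops support it),
# change the default dtype used for tensors created from Python floats:
torch.set_default_dtype(torch.float64)
print(torch.tensor([0.1]).dtype)
> torch.float64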