FloatTensor precision is not accurate

import torch

torch.set_printoptions(precision=20)
print(torch.__version__)
s = torch.FloatTensor([0.050, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]).cuda()
print(s)

1.0.1.post2
tensor([0.05000000074505805969, 0.15000000596046447754, 0.25000000000000000000,
        0.34999999403953552246, 0.44999998807907104492, 0.55000001192092895508,
        0.64999997615814208984, 0.75000000000000000000, 0.85000002384185791016,
        0.94999998807907104492], device='cuda:0')

As you can see, the values are not exactly what I defined, which affects my subsequent results that depend on these fractions. Why aren't the tensor values exactly as they were defined?

Hi,

This is expected: single-precision floats are only accurate to roughly 6-7 significant decimal digits. This comes from the IEEE 754 floating-point specification, not from PyTorch; a value like 0.05 has no exact binary representation, so the tensor stores the nearest representable float32 (notice that 0.25 and 0.75, which are exact in binary, do print exactly).
If you absolutely need more precision than that for your application, you will need to use double precision (float64).
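
For example, here is a minimal sketch (reusing the first few values from the snippet above, on CPU so no GPU is needed; the exact printed digits may vary slightly on your setup) of how float64 narrows the error:

import torch

torch.set_printoptions(precision=20)

vals = [0.050, 0.15, 0.25, 0.35, 0.45]

s32 = torch.tensor(vals, dtype=torch.float32)  # ~7 significant digits
s64 = torch.tensor(vals, dtype=torch.float64)  # ~15-16 significant digits

print(s32)  # 0.05 stored as the nearest float32: 0.05000000074505805969
print(s64)  # much closer, but still not exact:   0.05000000000000000278

Note that even float64 cannot store 0.05 exactly; it only makes the rounding error far smaller.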