Consider the following setup (we can suppose the matrix `a` is a grayscale image):

```
In [3]: a = (255 * np.random.random([5, 5])).astype(np.uint8)
In [4]: b = torch.cuda.FloatTensor(a.astype(np.float32) / 255)
In [5]: c = torch.cuda.FloatTensor(a) / 255
In [6]: b - c
Out[6]:
1.00000e-08 *
-5.9605 -2.9802 -5.9605 0.0000 0.0000
-5.9605 -1.4901 0.0000 -5.9605 -5.9605
0.0000 -5.9605 -5.9605 0.0000 0.0000
0.0000 -1.4901 -1.4901 -5.9605 0.0000
0.0000 -2.9802 0.0000 0.0000 -5.9605
[torch.cuda.FloatTensor of size 5x5 (GPU 0)]
```

I know that the difference is due to the limited precision of 32-bit floating-point numbers. My question: is there a sense in which `b` is a more accurate result than `c`? Computing `c` is a little faster than computing `b` when `a` is large, so is there any advantage to preferring `b` despite the speed difference?
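One way to make "more accurate" concrete is to compare both float32 results against a float64 reference. The sketch below does this on the CPU (the original snippet used `torch.cuda.FloatTensor`; the CPU variants `torch.tensor(...)` here are my substitution so it runs without a GPU), measuring each tensor's maximum deviation from a double-precision division:

```python
# Sketch (not from the original post): compare both float32 results
# against a float64 reference division to see which is closer.
import numpy as np
import torch

np.random.seed(0)
a = (255 * np.random.random([5, 5])).astype(np.uint8)

# b: divide in NumPy float32, then convert to a torch tensor
b = torch.tensor(a.astype(np.float32) / 255)
# c: convert to a float32 tensor first, then divide in torch
c = torch.tensor(a, dtype=torch.float32) / 255

# float64 "ground truth" for the division
ref = torch.tensor(a.astype(np.float64) / 255)

err_b = (b.double() - ref).abs().max().item()
err_c = (c.double() - ref).abs().max().item()
print(err_b, err_c)
```

Both errors stay within about half an ulp of float32 for values in [0, 1] (roughly 6e-8), which matches the `1.00000e-08 *` scale of the differences shown above; on the GPU the two code paths may round differently, which is what produces the nonzero entries in `b - c`.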