# Sum of tensor different from sum of indexed tensor

The following snippet outputs either 0. or values close to -1e-3:

``````
import torch

for _ in range(10):
    a = torch.empty((3,))
    a.uniform_(1000, 10000)
    print(a.sum() - a[[2, 1, 0]].sum())
``````

`torch.__version__` is `1.0.1.post2`.

Is there any reason the snippet above should output values different from 0? I also experimented with CUDA and got similar values.

My output is below. Maybe it's because of the float32 representation.

``````tensor([9392.1943, 9014.6592, 1103.7108])
tensor(0.)
tensor([5654.9507, 2363.8105, 7604.2734])
tensor(0.)
tensor([3609.6643, 2368.4341, 1899.2728])
tensor(0.0005)
tensor([2087.3135, 2011.1553, 6333.9561])
tensor(0.)
tensor([2378.4221, 3015.3701, 8273.2061])
tensor(0.)
tensor([9901.0283, 9634.8008, 9160.9082])
tensor(-0.0020)
tensor([6951.8438, 6391.8174, 8297.8105])
tensor(0.)
tensor([5154.0435, 4652.0537, 9261.4443])
tensor(0.0020)
tensor([4264.4990, 8310.5381, 8930.8291])
tensor(0.)
tensor([9919.5156, 5053.4619, 3984.4778])
tensor(0.)
``````
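For what it's worth, floating-point addition is not associative, so summing the same values grouped in a different order can round differently. A minimal sketch with plain Python floats (float64, but the same effect applies to float32):

```python
# Floating-point addition is not associative: grouping the same three
# numbers differently can produce different rounded results.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False
```

If indexing with `[[2, 1, 0]]` causes the reduction to visit elements in a different order, this alone would explain small nonzero differences.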

This also happens with other initializations (`rand` in the following case), and the discrepancy grows with the size of the tensor:

``````
for _ in range(10):
    a = torch.rand(3, 10000)
    print(a.sum() - a[[2, 1, 0]].sum())
``````

outputs:

``````tensor(0.0020)
tensor(0.0029)
tensor(-0.0020)
tensor(-0.0010)
tensor(-0.0039)
tensor(0.0049)
tensor(0.0039)
tensor(0.0020)
tensor(-0.0039)
tensor(-0.0020)
``````
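One way to test the float32 hypothesis is to repeat the comparison in float64, where the rounding error should shrink to far below the printed precision. A sketch (the seed is arbitrary, chosen only for reproducibility):

```python
import torch

torch.manual_seed(0)  # arbitrary seed, just for reproducibility
a = torch.rand(3, 10000)

# float32: the indexed sum may reduce the rows in a different order,
# so the two results can disagree by ~1e-3 at this magnitude (~15000).
diff32 = (a.sum() - a[[2, 1, 0]].sum()).item()

# float64: same comparison, but the rounding error is around 1e-9
# or less, invisible at the precision torch prints by default.
a64 = a.double()
diff64 = (a64.sum() - a64[[2, 1, 0]].sum()).item()

print(diff32, diff64)
```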

I was also able to reproduce this in PyTorch 0.4.1.