nn.Linear returns different values when fed the same tensor

I ran the following PyTorch code on my machine, but the result is not what I expected. Why does this happen, and how can I fix it?

>>> import torch
>>> import torch.nn as nn
>>> linear = nn.Linear(300, 300)
>>> input = torch.randn(10, 300, 300)
>>> out1 = linear(input)
>>> out2 = linear(input[0])
>>> out1[0].equal(out2)
False
>>> out2.equal(out1[0])
False
>>> (out1[0] - out2).pow(2).sum()
tensor(6.9483e-12, grad_fn=<SumBackward0>)

Thanks!

You are seeing the limited floating point precision of float32. The batched forward pass and the single-sample forward pass can dispatch to different kernels that accumulate sums in a different order, so the outputs may differ by tiny rounding errors even though they are mathematically identical. If you need more precision, you could cast the input tensors and the module's parameters to double.
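A minimal sketch of both points (assuming a recent PyTorch; the `atol` tolerance below is illustrative, not a guaranteed bound): the two outputs agree within normal float32 tolerance, and casting the module and input to float64 shrinks the discrepancy by several orders of magnitude.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
linear = nn.Linear(300, 300)
x = torch.randn(10, 300, 300)

out1 = linear(x)     # batched forward pass
out2 = linear(x[0])  # single-sample forward pass

# Not bitwise identical, but equal within float32 rounding error.
print(torch.allclose(out1[0], out2, atol=1e-5))

# Cast the module's parameters and the input to float64 and compare
# the maximum elementwise discrepancy in each precision.
linear64 = linear.double()
x64 = x.double()
diff32 = (out1[0] - out2).abs().max().item()
diff64 = (linear64(x64)[0] - linear64(x64[0])).abs().max().item()
print(f"float32 max diff: {diff32:.2e}")
print(f"float64 max diff: {diff64:.2e}")
```

Note that `linear.double()` converts the parameters in place; the float64 results may still not be bitwise identical, just accurate to far more digits.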
