Rounding error with torch.round

When I wrote:

torch.round(torch.tensor(0.5))

I expected it to return ‘1’, but I got ‘0’.
And when I wrote this:

torch.round(torch.tensor(1.5))

I got the value I expected, ‘2’.
What happened to ‘0.5’?

I think torch.round follows the same convention as NumPy’s implementation. The NumPy documentation explains:

For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in the IEEE floating point standard and errors introduced when scaling by powers of ten.
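
A quick sketch to confirm the round-half-to-even behavior (the floor(x + 0.5) line is just one common workaround for round-half-up, not something from the quoted docs):

import torch

# Halfway values round to the nearest even integer ("banker's rounding").
x = torch.tensor([0.5, 1.5, 2.5, -0.5])
print(torch.round(x))        # -> 0., 2., 2., -0.

# If you want round-half-up instead, a common workaround is:
print(torch.floor(x + 0.5))  # -> 1., 2., 3., 0.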

I got it. Thank you very much :+1: