PyTorch rounding


I’ve noticed that torch.round() rounds halfway values to the nearest even number, as shown in this example:
>>> torch.Tensor([1.5, 2.5, 3.5, 4.5]).round()
tensor([2., 2., 4., 4.])
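(For comparison, Python’s own built-in round() follows the same IEEE-754 round-half-to-even rule, so the CPU result matches standard behavior:)

```python
# Python 3's built-in round() also rounds halves to the nearest even integer
# (round-half-to-even, a.k.a. "banker's rounding").
print([round(v) for v in [1.5, 2.5, 3.5, 4.5]])  # [2, 2, 4, 4]
```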

But when moving to CUDA tensors, it behaves differently and rounds away from zero:
>>> torch.Tensor([1.5, 2.5, 3.5, 4.5]).cuda().round()
tensor([2., 3., 4., 5.], device='cuda:0')

Is it normal for the same method to behave differently on CPU and GPU, or should I report this somewhere? Is there a way to choose the rounding mode (nearest even number / away from zero) without having to implement one myself?
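In the meantime, rounding halves away from zero can be done by hand. A minimal sketch in plain Python (the same formula applies elementwise to a tensor as `x.sign() * torch.floor(x.abs() + 0.5)`; `round_half_away_from_zero` is just an illustrative name):

```python
import math

def round_half_away_from_zero(x):
    # Add 0.5 to the magnitude, floor it, then restore the sign:
    # 1.5 -> 2, 2.5 -> 3, -2.5 -> -3 (halves move away from zero).
    return math.copysign(math.floor(abs(x) + 0.5), x)

print([round_half_away_from_zero(v) for v in [1.5, 2.5, 3.5, 4.5]])
# [2.0, 3.0, 4.0, 5.0]
```

Note this trick assumes exact halves; values just below a half (e.g. 0.49999999999999994) can be nudged over by the `+ 0.5`, so it is a workaround, not a general-purpose replacement.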



I think we may want to get the CPU to have the same behavior as the GPU.
@smth should an issue be opened for this?

I’ve opened an issue.

Thanks for opening the issue.

I don’t think that PyTorch rounds to the nearest even number.

Instead, I would venture that this is likely an issue of floating point representations being exact only to machine precision:

>>> torch.tensor(0.50000001, dtype=torch.float).round()
tensor(0.)
>>> torch.tensor(0.5000001, dtype=torch.float).round()
tensor(1.)
>>> torch.tensor(0.50000001, dtype=torch.double).round()
tensor(1., dtype=torch.float64)
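The key point is that 0.50000001 is not representable in single precision: it snaps to exactly 0.5 when stored as float32, while the larger offset in 0.5000001 survives. A quick stdlib check of this (no torch needed; `to_float32` is just an illustrative helper):

```python
import struct

def to_float32(x):
    # Round-trip a Python float through IEEE-754 single precision,
    # mimicking what storing the literal in a torch.float tensor does.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(0.50000001))  # exactly 0.5: the extra digits are lost
print(to_float32(0.5000001))   # slightly above 0.5, survives in float32
```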

Best regards