I’ve noticed that torch.round() rounds values exactly halfway between two integers to the nearest even integer (banker’s rounding), as shown in this example:
>>> torch.Tensor([1.5, 2.5, 3.5, 4.5]).round()
tensor([2., 2., 4., 4.])
But when moving to CUDA tensors, it behaves differently and rounds away from zero:
>>> torch.Tensor([1.5, 2.5, 3.5, 4.5]).cuda().round()
tensor([2., 3., 4., 5.], device='cuda:0')
Is it normal for the same method to behave differently on CPU and GPU, or should I report this somewhere? Also, is there a way to choose the rounding mode (round half to even vs. round half away from zero) without having to implement one myself?
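In case it helps frame the question: as a workaround I can get round-half-away-from-zero behavior by shifting the magnitude before truncating. This is just a sketch of the semantics I'm after (the helper name `round_half_away_from_zero` is my own, not a torch API), not a proposed fix:

```python
import torch

def round_half_away_from_zero(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: shift the magnitude by 0.5, floor it,
    # then restore the sign, so halfway cases move away from zero.
    return torch.sign(x) * torch.floor(torch.abs(x) + 0.5)

t = torch.tensor([1.5, 2.5, -3.5, 4.5])
print(t.round())                     # CPU: half to even -> tensor([ 2.,  2., -4.,  4.])
print(round_half_away_from_zero(t))  # tensor([ 2.,  3., -4.,  5.])
```

But having to carry a helper like this around is exactly what I'd like to avoid, hence the question about a built-in rounding-mode option.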