Inconsistent results of torch.argmax with tensors that have duplicated values

I was trying to use the torch.argmax function with a tensor that has duplicated values in it, and I found inconsistencies in the results. This was done on a machine with PyTorch 1.0.0 / CUDA 9.0:

import torch

cpu = torch.LongTensor([[2, 3, 0, 3, 3], [1, 3, 1, 3, 1]])
cpu.argmax(dim=1)  # tensor([4, 3])
cuda = torch.cuda.LongTensor([[2, 3, 0, 3, 3], [1, 3, 1, 3, 1]])
cuda.argmax(dim=1)  # tensor([4, 1], device='cuda:0')

When I did the same on another machine (PyTorch 0.4.1, CUDA 9.0):

cpu = torch.LongTensor([[2, 3, 0, 3, 0], [1, 3, 1, 3, 1]])
cpu.argmax(dim=1)  # tensor([4, 1])
cuda = torch.cuda.LongTensor([[2, 3, 0, 3, 0], [1, 3, 1, 3, 1]])
cuda.argmax(dim=1)  # tensor([1, 1], device='cuda:0')

Having tried the same operation with many different tensors, it seems to me that the inconsistency happens without any regularity (the results are consistent for most tensors). What could be the reason for this? Shouldn't this be fixed?
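As far as I can tell, torch.argmax does not guarantee which index is returned when the maximum value occurs more than once, so CPU and CUDA (and different versions) are free to break ties differently. If a deterministic answer is needed, a sketch of a workaround is to compute the first index of the maximum explicitly (this is my own illustration, not an official API):

```python
import torch

t = torch.tensor([[2, 3, 0, 3, 3], [1, 3, 1, 3, 1]])

# Mask of positions equal to each row's maximum.
mask = t == t.max(dim=1, keepdim=True).values
# Replace non-max positions with an out-of-range sentinel index, then take
# the minimum: this yields the FIRST index of the max in each row,
# independent of device.
idx = torch.arange(t.size(1)).expand_as(t)
first_idx = torch.where(mask, idx, torch.full_like(idx, t.size(1))).min(dim=1).values
print(first_idx)  # tensor([1, 1])
```

The same idea with max swapped for min gives a deterministic argmin as well.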
