Tensor operation bug?

Running the following code yields very strange results:

import torch

for i in range(100):
    a = torch.rand((189, 4))                       # float32
    b = torch.rand((189, 4), dtype=torch.float64)  # float64
    print(torch.max(a[:, :2] + b[:, :2]))

Gives:

tensor(2.8119e+275, dtype=torch.float64)
tensor(nan, dtype=torch.float64)
tensor(nan, dtype=torch.float64)
tensor(2.8119e+275, dtype=torch.float64)
tensor(9.2142e+269, dtype=torch.float64)
tensor(9.2142e+269, dtype=torch.float64)
...

Note that for this to happen the tensors need to have different dtypes and the operation has to involve a slice (a plain `a + b` behaves normally).

Am I missing something? Thanks in advance.

PyTorch 1.3.0

Could you update to 1.3.1? Some indexing/assignment operations with type promotion were broken in 1.3.0.
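Until you can upgrade, a possible workaround (a sketch, assuming the bug is confined to the implicit float32 → float64 promotion on sliced tensors) is to cast both operands to a common dtype explicitly before adding them:

```python
import torch

a = torch.rand((189, 4))                       # float32
b = torch.rand((189, 4), dtype=torch.float64)  # float64

# Explicit cast sidesteps the implicit promotion path that
# misbehaved in 1.3.0; the result stays in the expected range.
result = a[:, :2].double() + b[:, :2]
print(result.max())
```

Since both `torch.rand` tensors are drawn from [0, 1), the maximum of the sum should always be below 2, which makes the misbehavior easy to spot.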
