Strange output of torch.max()

In my code below, output_var.is_cuda is True:

_, max_probs = torch.max(output_var, 2)
print output_var.size()
print max_probs.size()
print torch.max(max_probs)

the output is:

(10L, 26L, 37L)
(10L, 26L, 1L)
37

So the size of output_var is (10L, 26L, 37L), and with

_, max_probs = torch.max(output_var, 2)

the values in max_probs should be from 0 to 36, as I understand it. Is that correct?
However, in my code torch.max(max_probs) sometimes returns 37. It happens randomly.

Is this a bug in PyTorch?
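
For reference, here is a minimal CPU-only sketch (shapes chosen arbitrarily) of what I expect: the indices returned by torch.max along dim 2 should be bounded by that dimension's size minus one.

import torch

# Small example with an arbitrary (2, 3, 5) tensor: the argmax along
# dim 2 is an index into that dimension, so it can never exceed 4.
x = torch.randn(2, 3, 5)
values, indices = torch.max(x, 2)

print values.size()
print indices.size()
print torch.max(indices)  # expected to be at most 4 (= 5 - 1)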

Hi Melody,

I’ve tried to reproduce this issue, but I am not able to.
Here is the snippet of code I am using:

import torch
from torch.autograd import Variable

output_var = Variable(torch.randn(10, 26, 37).cuda())

_, max_probs = torch.max(output_var, 2)
print output_var.size()
print max_probs.size()

# Re-draw random data on every iteration, since the problem reportedly
# shows up only occasionally; an argmax along dim 2 must never exceed 36.
for i in range(1000):
    output_var = Variable(torch.randn(10, 26, 37).cuda())
    _, max_probs = torch.max(output_var, 2)
    assert torch.max(max_probs).data[0] <= 36

Does this snippet fail on your computer?
If so, what is the output of:

print torch.__version__

and also the output of:

nvidia-smi
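
If it does fail, one more thing worth trying (just a sketch, assuming the out-of-range index only shows up on the GPU) is to run the same data through both the CPU and the CUDA path and compare the indices:

import torch
from torch.autograd import Variable

# Compare CPU and GPU argmax on identical data to see whether the
# out-of-range index appears only on the CUDA path.
cpu_data = torch.randn(10, 26, 37)
cpu_var = Variable(cpu_data)
gpu_var = Variable(cpu_data.cuda())

_, cpu_idx = torch.max(cpu_var, 2)
_, gpu_idx = torch.max(gpu_var, 2)

# 0 if the two devices agree everywhere (ties aside, which are
# essentially impossible with randn data).
print (cpu_idx.data != gpu_idx.data.cpu()).sum()
print torch.max(cpu_idx).data[0], torch.max(gpu_idx).data[0]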