# Torch.max returns wrong indices

Hi Team,
Below is an XOR neural network written in PyTorch. It looks like the output given by `max()` is wrong. Please review my output below as a proof of concept.

```python
import torch as th
from torch.autograd import Variable

epochs = 2000
lr = 1
XOR_X = [[0, 0], [0, 1], [1, 0], [1, 1]]
XOR_Y = [[0, 1], [1, 0], [1, 0], [0, 1]]

# inputs, targets, and weights (hidden size of 5 is assumed;
# the weight definitions were omitted from the original post)
x_ = Variable(th.FloatTensor(XOR_X))
y_ = Variable(th.FloatTensor(XOR_Y))
w1 = Variable(th.randn(2, 5), requires_grad=True)
w2 = Variable(th.randn(5, 2), requires_grad=True)

def forward(x):
    a2 = x.mm(w1)
    # pytorch didn't have numpy-like broadcasting when i wrote this script;
    # expand_as makes the tensor the same size as the other tensor
    h2 = a2.sigmoid()
    a3 = h2.mm(w2)
    hyp = a3.sigmoid()
    return hyp

for epoch in range(epochs):
    hyp = forward(x_)
    cost = (y_ - hyp).pow(2).sum()
    if epoch % 500 == 0:
        print(cost.data)
    cost.backward()
    # gradient descent step, then reset the accumulated gradients
    w1.data -= lr * w1.grad.data
    w2.data -= lr * w2.grad.data
    w1.grad.data.zero_()
    w2.grad.data.zero_()

for x in XOR_X:
    hyp = forward(Variable(th.FloatTensor([x])))
    values, indices = hyp.max(0)
    print('==========================\nX is: ', x)
    print('==========================\n hyp is: ', hyp)
    print('==========================\n indices from argmax: ', indices)
```

```
==========================
X is:  [0, 0]
==========================
 hyp is:  Variable containing:
 0.0166  0.9810
[torch.FloatTensor of size 1x2]

==========================
 indices from argmax:  Variable containing:
 0  0
[torch.LongTensor of size 1x2]
```


I ran into the same issue when getting the max indices: it would consistently return all zeros.

The issue is that your tensor has size (1x2) and you are taking the max over dimension 0, which has only one element, so the winning index is always 0. Take the max over dimension 1 instead.
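To illustrate the difference, here is a minimal sketch using a modern PyTorch API (`torch.tensor` instead of the old `Variable` wrapper) with a 1x2 tensor like the one the network returns for a single input:

```python
import torch

# a 1x2 output for one input, like the hyp printed above
hyp = torch.tensor([[0.0166, 0.9810]])

# dim 0 has size 1: each column's "max" is its only element,
# so every returned index is 0
values0, indices0 = hyp.max(0)
print(indices0)  # tensor([0, 0])

# dim 1 compares the two class scores against each other,
# giving the predicted class for the row
values1, indices1 = hyp.max(1)
print(indices1)  # tensor([1])
```

In general, `tensor.max(dim)` reduces *along* `dim`, so to pick the best class per example you reduce over the class dimension (here dimension 1), not the batch dimension.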


I am so dumb. Thanks @fmassa!