[Solved] Results do not match with two different equivalent approaches

Today I was doing some simple calculations like

    total_node = 0
    correct_node = 0
    correct_node2 = 0

    for ...:
        correct_node += torch.sum(target_sequence == pred)
        correct_node2 += torch.sum(target_sequence == pred).data[0]

but I found that correct_node != correct_node2.

How could this happen?

In one you are adding a Variable, in the other a plain Python number?

correct_node is a Python int; pred and target_sequence are Variables.

The result in correct_node is wrong, while correct_node2 is correct.
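
A quick way to see the type mismatch the reply above is pointing at (a minimal, hypothetical repro, not the original training code; the values for pred and target_sequence are made up, and on current PyTorch versions the comparison yields a bool tensor rather than a ByteTensor, but the effect on correct_node is the same):

    import torch

    # Hypothetical stand-ins for pred and target_sequence from the post.
    pred = torch.tensor([1, 2, 3])
    target_sequence = torch.tensor([1, 2, 0])

    correct_node = 0                                 # starts as a plain Python int
    correct_node += torch.sum(target_sequence == pred)
    print(type(correct_node))                        # <class 'torch.Tensor'>, no longer an int

So after the first “+=”, correct_node is silently rebound to a tensor.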

You mean that the “+=” operator does not work properly for an int and a Variable?

Try correct_node = correct_node + torch.sum(…)

I had a problem a while ago using “+=”, so this might be a solution.

The problem remains.

It should not be a problem with the “+=” operator, since plenty of code accumulates the total loss with “+=”.

OK, I got it.

torch.sum(target_sequence == pred) returns a ByteTensor, so after the first “+=”, correct_node is a ByteTensor too.

correct_node just overflows: a ByteTensor holds values 0–255, so the count wraps around at 256.
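
Here is a minimal sketch of that overflow. One caveat: on the Variable-era PyTorch in this thread, sum() of a ByteTensor stayed a ByteTensor; current versions promote the sum to int64, so this sketch forces uint8 accumulation explicitly to reproduce the wraparound:

    import torch

    mask = torch.ones(300, dtype=torch.uint8)       # e.g. 300 correct predictions in total

    # Accumulating in uint8, the way correct_node did: wraps modulo 256.
    acc = torch.tensor(0, dtype=torch.uint8)
    for chunk in mask.split(100):                   # pretend each chunk is a batch
        acc += chunk.sum().to(torch.uint8)          # stays uint8
    print(acc)                                      # tensor(44, dtype=torch.uint8), i.e. 300 % 256

    # Extracting a Python number first, the way correct_node2 (.data[0]) did: no overflow.
    total = 0
    for chunk in mask.split(100):
        total += int(chunk.sum())
    print(total)                                    # 300

That is why correct_node2, which accumulated plain Python ints, stayed correct.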

Ahh, OK :wink: nice, good to hear.