Incorrect result of autograd on indexed vector


I did a small test of computing a gradient and found that the result is incorrect.
Let’s say I have x = [x0, x1, x2, x3] and index it by [0, 0, 3] to get y = [x0, x0, x3]. Then I sum it up as z = y0 + y1 + y2 = 2 * x0 + x3. So the gradient of z with respect to x should be [2, 0, 0, 1]. However, the result of autograd is [1, 0, 0, 1].

The script is as follows:

#!/usr/bin/env python

import torch
from torch.autograd import Variable

x = Variable(torch.rand(4), requires_grad=True)
print 'x = [%s]' % ', '.join(['%g' % i for i in x.data])

y = x[torch.Tensor([0,0,3]).long()]
print 'y = [x[0], x[0], x[3]] = [%s]' % ', '.join(['%g' % i for i in y.data])

z = torch.sum(y)
print 'z = sum(y) = 2 * x[0] + x[3] = %g' % z.data[0]

z.backward()
print 'grad_x(z) = [%s]' % ', '.join(['%g' % i for i in x.grad.data])
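For reference, the same check can be sketched against the newer PyTorch API (assuming a version from 0.4 onward, where `requires_grad` is set on the tensor directly and no `Variable` wrapper is needed); indexing with a repeated index should accumulate the gradient of the repeated element:

```python
import torch

# x = [x0, x1, x2, x3], tracked by autograd directly (post-0.4 API).
x = torch.rand(4, requires_grad=True)

# Index with [0, 0, 3]: y = [x0, x0, x3].
y = x[torch.tensor([0, 0, 3])]

# z = y0 + y1 + y2 = 2 * x0 + x3.
z = y.sum()
z.backward()

# The backward pass of indexing should sum contributions of repeated
# indices, giving dz/dx = [2, 0, 0, 1].
print(x.grad.tolist())
```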


I get the correct last output grad_x(z) = [2, 0, 0, 1].
I tested both the version I had installed (a ~git checkout from 2017-06-2x) and a fresh checkout of git master (torch.__version__ is 0.1.12+4d5d9de from today, 2017-07-19).
I am on python3.5, though.

What is your torch.__version__? Maybe you had an unlucky checkout…

Best regards


Hi Thomas,

Thanks a lot for testing it. It is interesting that we got different results.

The software I am using is:
pytorch: 0.1.12_2, installed from pip yesterday
python: 2.7
uname: Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u1 (2017-06-18) x86_64 GNU/Linux