Calling backward() with indexing in the function

Still very new to PyTorch, but loving the style.

I am stuck on a small problem where I cannot get the gradient or call backward() when using masked_select(). I am willing to use index_select() if I can figure out how to get the index. I feel close with nonzero() but can’t quite get it to work.

This works, when I build the index by hand:

```python
import torch
from torch.autograd import Variable
import numpy as np

x = np.array([0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409])
x = Variable(torch.from_numpy(x.astype('float32')), requires_grad=True)
cond = Variable(torch.from_numpy(np.array([0, 1, 3, 4]).astype('int32')))
y = x.index_select(0, cond.long())
out = y.sum()
out.backward()
print(x.grad)
```

When I try to build the condition dynamically and use masked_select(), it fails with a NotImplementedError:

```python
cond = (x > -999.)
y = x.masked_select(cond)
```
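(Note for anyone landing here later: on recent PyTorch releases, where Variable has been merged into Tensor, this exact masked_select() call does work and gradients flow through it. A minimal sketch, assuming a current version:)

```python
import torch

# Same data as above, but built directly as a tensor with requires_grad
x = torch.tensor([0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409],
                 requires_grad=True)

# masked_select keeps only the entries where the mask is True
y = x.masked_select(x > -999.0)
out = y.sum()
out.backward()

print(x.grad)  # 1 at the kept positions, 0 at the masked-out entry
```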

So I figured I would get the indices and pass those to index_select(), but that raises a TypeError:

```python
cond_idx = torch.nonzero(cond)
```

    *** TypeError: Type Variable doesn't implement stateless method nonzero
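(For reference, on current PyTorch this approach also works as written, since nonzero() is available on tensors directly; its result can be squeezed into a 1-D index and handed to index_select(). A sketch assuming a recent version:)

```python
import torch

x = torch.tensor([0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409],
                 requires_grad=True)

cond = x > -999.0                 # boolean mask
idx = cond.nonzero().squeeze(1)   # (4, 1) -> (4,) long tensor of indices
y = x.index_select(0, idx)
out = y.sum()
out.backward()

print(x.grad)  # 1 at the selected positions, 0 at index 2
```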

Any ideas how to get this to work?

Does this work: y = x[x>-999.] ?

I had started there, but found some forum threads saying that NumPy-style indexing is not supported. I just tried again and got the same *** NotImplementedError.

After much searching, it appears that this discussion from @apaszke is relevant, with the .unsqueeze(1) being critical to making it work.

Full example:

```python
import torch
from torch.autograd import Variable
import numpy as np

x = np.array([0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409])
x = Variable(torch.from_numpy(x.astype('float32')), requires_grad=True)
y = x[(x > -999.).unsqueeze(1)]
out = y.sum()
out.backward()
print(x.grad)
```
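(On newer PyTorch versions the unsqueeze(1) trick is no longer needed: plain boolean indexing works on tensors and is differentiable. A hedged equivalent of the example above, assuming a current release:)

```python
import torch

x = torch.tensor([0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409],
                 requires_grad=True)

y = x[x > -999.0]   # boolean (mask) indexing, NumPy-style
out = y.sum()
out.backward()

print(x.grad)  # 1 where the mask was True, 0 at the -999.0 entry
```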

See this answer in this post.