Indexing a tensor the way that NumPy allows

import torch
from torch.autograd import Variable

input = Variable(torch.LongTensor([-1, -1, 0, 1]))
input2 = Variable(torch.LongTensor([[0, 0], [1, 1], [2, 2], [3, 3]]))
temp = input < 1

print(input2[temp, :])

I get:

RuntimeError: inconsistent tensor size at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:193

If the indexed tensor is 1-D, e.g.

input[temp]

there is no problem. Is there a way to do what I want?

The current way of doing what you want is the following:

input = Variable(torch.LongTensor([-1, -1, 0, 1]))
input2 = Variable(torch.LongTensor([[0, 0], [1, 1], [2, 2], [3, 3]]))
temp = input < 1

indices = temp.data.nonzero()[:,0]

print(input2[indices])

Note that once broadcasting and advanced indexing land, we will get even closer to NumPy indexing semantics.
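For reference, here is a minimal sketch of the NumPy semantics being targeted: a 1-D boolean mask applied to a 2-D array selects whole rows, which is exactly what the original `input2[temp, :]` call attempts to do.

```python
import numpy as np

inp = np.array([-1, -1, 0, 1])
inp2 = np.array([[0, 0], [1, 1], [2, 2], [3, 3]])

# A 1-D boolean mask over the first axis...
mask = inp < 1

# ...selects the rows where the mask is True (rows 0, 1, 2).
result = inp2[mask]
print(result.tolist())  # [[0, 0], [1, 1], [2, 2]]

# The explicit-slice form from the question works the same way in NumPy.
result2 = inp2[mask, :]
```

The workaround above (`nonzero()` followed by integer indexing) computes the same row selection by hand.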
