How to index a two-dimensional tensor with two one-dimensional tensors

I want to index a two-dimensional tensor the way I do in NumPy.

a = torch.rand(5, 5)
b = torch.LongTensor([4, 3, 2, 1, 0])

But typing a[b,b] gives an error:

TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.

How do I index a two-dimensional tensor using two other tensors?
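
For comparison, this is the NumPy behavior I mean (note that a[b, b] in NumPy picks the pointwise pairs a[4, 4], a[3, 3], ..., a[0, 0], not a full grid):

import numpy as np

a_np = np.random.rand(5, 5)
b_np = np.array([4, 3, 2, 1, 0])
print(a_np[b_np, b_np])  # five elements, one per (row, column) pair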


Doing this is not supported yet. For now you have to do the indexing in two steps, each with a single LongTensor (which is supported):

a[b, :][:, b]

We plan to tackle this soon.


Thanks for your reply. But this still gives the same error:

>>> a[b,:][:,b]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.

My PyTorch version is:

$ conda list | grep torch  
pytorch                   0.1.10               py35_1cu80  [cuda80]  soumith
torchvision               0.1.6                   py35_19    soumith

The answer you gave is not correct:

a = torch.rand(5, 5)
b = torch.LongTensor([4, 3, 2, 1, 0])
a[b, :][:, b]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
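
In the meantime, torch.index_select sidesteps __getitem__ entirely, so a sketch like this should work even on that version (selecting along dim 0 for rows, then dim 1 for columns):

a = torch.rand(5, 5)
b = torch.LongTensor([4, 3, 2, 1, 0])
rows = torch.index_select(a, 0, b)     # pick rows 4, 3, 2, 1, 0
grid = torch.index_select(rows, 1, b)  # then pick the same columns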

Any news here?
I have the same issue.

x[b].transpose(1, 0)[b].transpose(1, 0)

should do the trick.
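
Written out step by step (single-LongTensor indexing such as x[b] is supported, per the error message):

rows = x[b]                     # reorder rows; a lone LongTensor index works
cols = rows.transpose(1, 0)[b]  # transpose so columns become rows, reorder those
result = cols.transpose(1, 0)   # transpose back to the original layout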

Strange that the error message for x[b, :] reads:

TypeError: indexing a tensor with an object of type torch.LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.

That seems self-contradictory. (The catch is "as the only argument": a lone LongTensor index works, but not one inside a tuple.)

You can also try adding something like:

_oldgetitem = torch.FloatTensor.__getitem__


def _getitem(self, slice_):
    # If the index is a tuple containing a LongTensor, move the indexed
    # dimension to the front, apply the (supported) single-LongTensor index,
    # and move it back. Note: only the first LongTensor in the tuple is
    # applied; the remaining entries are assumed to be full slices.
    if isinstance(slice_, tuple) and any(isinstance(ix, torch.LongTensor) for ix in slice_):
        i = [j for j, ix in enumerate(slice_)
             if isinstance(ix, torch.LongTensor)][0]
        return self.transpose(0, i)[slice_[i]].transpose(i, 0)
    else:
        # Everything else goes through the original __getitem__.
        return _oldgetitem(self, slice_)


torch.FloatTensor.__getitem__ = _getitem

Then x[b, :][:, b] works as expected. Maybe a bit of a hack though.
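
For example, a small sanity check with the patch applied:

a = torch.rand(5, 5)
b = torch.LongTensor([4, 3, 2, 1, 0])
out = a[b, :][:, b]  # each step now routes through _getitem
print(out.size())    # (5, 5): rows and columns both reversed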

Regarding x[b, :][:, b] vs. x[b, b] – this distinction has never really been solved for NumPy arrays either: there, x[b, b] does pointwise fancy indexing, and you need np.ix_ to select the full grid.
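
A short illustration of how the two spellings differ in NumPy:

import numpy as np

x = np.arange(25).reshape(5, 5)
b = np.array([4, 3, 2, 1, 0])

x[b, b]          # pointwise: x[4,4], x[3,3], ... -> shape (5,)
x[np.ix_(b, b)]  # full grid of row/column combinations -> shape (5, 5)
x[b, :][:, b]    # the same grid via chained indexing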
