How to do advanced indexing with LibTorch?

In Python, something like a[1:10:2, :, None, :] can be written directly. Is there a convenient way to do the same thing with LibTorch in one step?

Yes, I know I can do this with a.slice(0, 1, 10, 2).unsqueeze(-2), but is there a more elegant way to write it, like in Python?

To be more precise, I want support for these forms:

  1. :
  2. None
  3. ...
  4. start:end:step

Yes, we are currently working on it; this is one of the PRs: https://github.com/pytorch/pytorch/pull/30425.

Starting from the current nightly build (and PyTorch 1.5 soon), for

a[1:10:2, :, None, :]

we can write

using namespace torch::indexing;
a.index({Slice(1, 10, 2), Slice(), None, Slice()});
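
For reference, here is a minimal self-contained sketch of that call; the example tensor shape {12, 3, 4} is an assumption for illustration, not something from the original post:

#include <torch/torch.h>
#include <iostream>

int main() {
    using namespace torch::indexing;

    // Illustrative input tensor (shape chosen arbitrarily).
    torch::Tensor a = torch::rand({12, 3, 4});

    // Equivalent of Python's a[1:10:2, :, None, :]
    torch::Tensor b = a.index({Slice(1, 10, 2), Slice(), None, Slice()});

    std::cout << b.sizes() << std::endl;  // prints [5, 3, 1, 4]
}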

Here is the general translation for the Tensor::index and Tensor::index_put_ functions:

Python             C++ (assuming `using namespace at::indexing`)
-------------------------------------------------------------------
0                  0
None               None
...                "..." or Ellipsis
:                  Slice()
start:stop:step    Slice(start, stop, step)
True / False       true / false
[[1, 2]]           torch::tensor({{1, 2}})
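
As a sketch of how a few of the rows above combine with Tensor::index_put_ (which takes the same index list plus the values to write); the tensor t, its shape, and the written values are illustrative assumptions:

using namespace torch::indexing;

torch::Tensor t = torch::zeros({4, 5});

// Python: t[..., 0] = 1        ("..." / Ellipsis)
t.index_put_({Ellipsis, 0}, 1);

// Python: t[t > 0.5] = 2       (boolean mask indexing)
t.index_put_({t > 0.5}, 2);

// Python: t[[[1, 2]]]          (tensor / nested-list indices)
torch::Tensor rows = t.index({torch::tensor({{1, 2}})});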