Equivalence of slicing and assignment in ATen C++

If I do something like this:

auto Xbt = torch::zeros({ batch_size, 10, 5, (3 + p) });
Xbt.narrow(0, 0, 1) = Xm.repeat({ 10, 1 }).view({ 10, 5, (3 + p) });

will it be the same as this:

Xbt[0] = Xm.repeat({ 10, 1 }).view({ 10, 5, (3 + p) });

I just want to do the equivalent of this line from Python:

Xbt[0, :, :, :] = Xm.repeat(10, 1).view(10, 5, (3 + p))

If you can use narrow/slice/select to get the subtensor, using copy_ for the assignment works. For advanced-indexing assignments, use index_put_.
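
To make this concrete, here is a minimal, self-contained sketch. The values of batch_size and p, and the shape of Xm, are my assumptions (they are not given in the thread), chosen so that the repeat + view calls line up:

#include <torch/torch.h>

int main() {
  int64_t batch_size = 4, p = 2;                 // assumed values, not from the thread
  auto Xm  = torch::randn({ 5, 3 + p });         // assumed shape so repeat + view works out
  auto Xbt = torch::zeros({ batch_size, 10, 5, 3 + p });

  // select(0, 0) returns a view of Xbt[0] (shape { 10, 5, 3 + p });
  // copy_ writes through that view into Xbt's storage.
  Xbt.select(0, 0).copy_(Xm.repeat({ 10, 1 }).view({ 10, 5, 3 + p }));

  // narrow(0, 0, 1) keeps the leading dim (shape { 1, 10, 5, 3 + p });
  // copy_ broadcasts the source into the view.
  Xbt.narrow(0, 0, 1).copy_(Xm.repeat({ 10, 1 }).view({ 10, 5, 3 + p }));
  return 0;
}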

Best regards

Thomas

@tom thanks for the answer! But why do I need to use copy_? After your suggestion I saw something like this on Stack Overflow:

Xbt.slice(0, 0, 1) = Xm.repeat({ 10, 1 }).view({ 10, 5, (3 + p) });

Would this not work?

I also found my way through the Discuss search to the golden tip about translating Python to C++, and discovered that this works:

Xbt.select(0, 0).copy_(Xm.repeat({ 10, 1 }).view({ 10, 5, (3 + p) }));

Golden tip about translating from Python to C++

  • Translate your Python script using the C++-style functions like put_ and narrow, test it (to make sure it works), and only then go to C++ and replicate everything. A sketch of this workflow follows below.
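
As an illustration of the tip (a sketch of my own, reusing the Xbt and Xm assumed earlier in the thread), the Python prototype and its C++ port line up almost token for token:

// Python prototype, already written with the C++-style calls:
//   Xbt.select(0, 0).copy_(Xm.repeat(10, 1).view(10, 5, 3 + p))
//
// The C++ port only swaps in brace-init lists for the size arguments:
Xbt.select(0, 0).copy_(Xm.repeat({ 10, 1 }).view({ 10, 5, 3 + p }));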

I must admit that I don't use Stack Overflow as a source of PyTorch advice much, but I would venture that the average person answering here (heavily skewed towards ptrblck and alband) is more involved in PyTorch than the average Stack Overflow user. Case in point: as noted in the comments to that answer, the code snippet has at least one important typo (Tensor vs. tensor) that leads to non-working code.
Back in the day, when I re-implemented torch::bilinear in C++, one could not assign to the result of a function call but had to bind it to a named variable first. That was probably more than two years ago, so it might have changed, but this is why I would use copy_. In Python it is impossible to assign to the result of a function call, too, so copy_ also fits better with the golden tip.
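
For illustration, the safe pattern looks like this (a sketch assuming the Xbt and Xm from earlier in the thread):

auto slice0 = Xbt.narrow(0, 0, 1);  // bind the view to a named variable first (shape { 1, 10, 5, 3 + p })
slice0.copy_(Xm.repeat({ 10, 1 }).view({ 10, 5, 3 + p }));  // copy_ broadcasts the source and writes into Xbt's storage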

Best regards

Thomas

Starting with the current nightly build (and soon PyTorch 1.5), for

Xbt[0, :, :, :] = Xm.repeat(10, 1).view(10, 5, (3 + p))

we can write

using namespace torch::indexing;
Xbt.index_put_({ 0, Slice(), Slice(), Slice() }, Xm.repeat({ 10, 1 }).view({ 10, 5, (3 + p) }));

Here is the general translation for the Tensor::index and Tensor::index_put_ functions:

Python             C++ (assuming `using namespace torch::indexing`)
-------------------------------------------------------------------
0                  0
None               None
...                "..." or Ellipsis
:                  Slice()
start:stop:step    Slice(start, stop, step)
True / False       true / false
[[1, 2]]           torch::tensor({{1, 2}})
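
A few of the rows from the table in action (a small sketch of my own; the tensor and shapes are made up):

using namespace torch::indexing;
auto t = torch::arange(24).view({ 2, 3, 4 });

auto a = t.index({ 0 });                          // Python: t[0]
auto b = t.index({ "...", 1 });                   // Python: t[..., 1]
auto c = t.index({ Slice(), Slice(0, 3, 2) });    // Python: t[:, 0:3:2]
auto d = t.index({ None });                       // Python: t[None]
auto e = t.index({ torch::tensor({{0, 1}}) });    // Python: t[[[0, 1]]]
t.index_put_({ Slice(), 0 }, -1);                 // Python: t[:, 0] = -1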