Does indexing a tensor return a copy of it?

Hi, it seems that indexing a tensor with a slice or an int returns a view of it, without copying its underlying storage, but indexing with another tensor (a bool or a long tensor, though not a 0-dim long tensor) or with a list returns a copy of the tensor. Am I right? If so, why is this the case?
This behavior doesn't seem to be documented anywhere in the docs.

original = torch.arange(12).reshape(3, 4)

view = original[2, 1:3]
view = original[torch.tensor(2), 1:3]

copy = original[2, torch.tensor([1, 2])]
copy = original[2, torch.tensor([False, True, True, False])]
copy = original[2, [1, 2]]

Another question: is that copy performed right away, or only upon tensor modification?
And if it is done right away, wouldn't it be better to do it lazily, to avoid an unnecessary copy in the case where the user just wants to examine the data?

copy = original[2, torch.tensor([False, True, True, False])] # Is copy performed here?
copy[1] = 100 # or here?

Hi Sadra!


(You can use .storage().data_ptr() to see if the underlying data of
a tensor has been copied or not.)

With slices you can leave the original data in its original location in memory
and still access it (reasonably) efficiently with strides and offsets.
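As a concrete illustration of this point (a small sketch, using the tensors from the question above): a slice is just a strided window onto the same buffer, described by an offset and strides.

```python
import torch

original = torch.arange(12).reshape(3, 4)  # contiguous, strides (4, 1)
view = original[2, 1:3]

# The view records only an offset and strides into the original buffer:
print(view.storage_offset())  # 9 -- element (2, 1) sits at 2*4 + 1*1
print(view.stride())          # (1,)
print(view.storage().data_ptr() == original.storage().data_ptr())  # True
```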

When you index into a tensor with another tensor or a list, you are plucking out
values from unstructured locations, so you would need a more complicated
view / access scheme, and pytorch prefers the efficiency that making a copy
provides for any future manipulations of the tensor.

(Pytorch does not try to be clever enough to recognize whether the result
of indexing into a tensor could have been obtained by slicing.)
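A small check of that point, using the `.data_ptr()` trick from above: an index tensor that selects exactly the rows a slice would select still produces a copy.

```python
import torch

original = torch.arange(12).reshape(3, 4)

# This index tensor selects the same rows as original[0:3], but pytorch
# still treats it as advanced indexing and copies:
sliced  = original[0:3]
indexed = original[torch.tensor([0, 1, 2])]

print(sliced.storage().data_ptr() == original.storage().data_ptr())   # True  -- view
print(indexed.storage().data_ptr() == original.storage().data_ptr())  # False -- copy
```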

There is some documentation here and there. Quoting, for example, from
the Tensor Views documentation:


When accessing the contents of a tensor via indexing, PyTorch follows Numpy behaviors that basic indexing returns views, while advanced indexing returns a copy. Assignment via either basic or advanced indexing is in-place.

The copy is performed right away – but note the exception to this (mentioned
in the quoted documentation) when you are assigning to an indexed tensor.
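One way to check the eagerness directly (a sketch): mutate the original after indexing and observe that the result does not change, so the copy must already have been made.

```python
import torch

original = torch.arange(12).reshape(3, 4)
copy = original[2, [1, 2]]   # advanced indexing copies eagerly
original[2, 1] = -1          # mutate the source afterwards
print(copy)                  # tensor([ 9, 10]) -- unaffected, so the copy already happened
```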

Here are some illustrations of what is going on:

>>> import torch
>>> print (torch.__version__)
>>> original = torch.arange (12).reshape (3, 4)
>>> view = original[2, 1:3]
>>> copy = original[2, [1, 2]]
>>> view.storage().data_ptr() == original.storage().data_ptr()   # same storage() so it's a view
True
>>> copy.storage().data_ptr() == original.storage().data_ptr()   # new storage() so it's a copy
False
>>> # note, assigning to an indexed tensor assigns in-place without a copy
>>> original[2, [1, 2]] = torch.tensor ([99, 666])
>>> original
tensor([[  0,   1,   2,   3],
        [  4,   5,   6,   7],
        [  8,  99, 666,  11]])
>>> # but indexing twice does cause a copy to be made
>>> original[2, torch.tensor([False, True, True, False])][1] = 100   # Is copy performed here? -- yes
>>> original
tensor([[  0,   1,   2,   3],
        [  4,   5,   6,   7],
        [  8,  99, 666,  11]])


K. Frank


Many thanks. Just a little question: in an assignment to an advanced-indexed tensor, how does PyTorch know that original[advanced_index] is on the left-hand side, so that it avoids the copy?

Hi Sadra!

I have no idea. I would be grateful if an expert could explain what
python / pytorch techniques are used to achieve this.

To my mind, this is conceptually similar to the distinction in C (and
descendant languages) between “l-values” that are (results of)
expressions that can be assigned to (“left-hand side of =”) and
“r-values” that are normal values (that you might assign to something).
But how python does it I don’t know.
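(One piece of the puzzle, for what it's worth, is plain Python rather than pytorch: `t[idx] = v` dispatches to `t.__setitem__(idx, v)`, while reading `t[idx]` as an ordinary expression dispatches to `t.__getitem__(idx)` -- so the left-hand-side case arrives through a different method and never creates an intermediate tensor. A toy sketch, with illustrative names:)

```python
class Recorder:
    """Toy class that records which dunder method Python dispatches to."""
    def __init__(self):
        self.calls = []
    def __getitem__(self, idx):
        self.calls.append(("getitem", idx))          # read: r-value context
        return 0
    def __setitem__(self, idx, value):
        self.calls.append(("setitem", idx, value))   # write: l-value context

r = Recorder()
_ = r[3]      # dispatches to __getitem__
r[3] = 99     # dispatches to __setitem__ -- no __getitem__ involved
print(r.calls)  # [('getitem', 3), ('setitem', 3, 99)]
```

(This also explains why indexing twice, as in `original[i][j] = v` with advanced `i`, does copy: the first `[i]` is an ordinary `__getitem__` expression, and only the outer `[j] = v` is a `__setitem__`.)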


K. Frank