Memory reallocation when creating a new torch.Tensor object

AFAIK, a torch.Tensor object is only a view over the corresponding torch.Storage object, which actually holds the contiguous data. But why is another torch.Storage object created when we create a new tensor from an existing one, for example by slicing it? Does this mean that the memory is allocated again in this case?

In other words, why does running the following snippet not raise an AssertionError?

import torch

tensor_1 = torch.ones(5, 5)
id1 = id(tensor_1.storage())

tensor_2 = tensor_1[:3, :3]
id2 = id(tensor_2.storage())

assert id1 != id2

Hi,

It will depend on the function. You can check in the docs whether a function returns a view of the original tensor or not.
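For example (a minimal sketch; the specific ops chosen here are just illustrative), slicing and .view() return views that share the original storage, while .clone() allocates new memory:

import torch

t = torch.ones(5, 5)

sliced = t[:3, :3]   # basic slicing returns a view sharing t's storage
viewed = t.view(25)  # .view() also shares the storage
copied = t.clone()   # .clone() allocates a new storage

print(sliced.storage().data_ptr() == t.storage().data_ptr())  # True
print(viewed.storage().data_ptr() == t.storage().data_ptr())  # True
print(copied.storage().data_ptr() == t.storage().data_ptr())  # False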

Also, id() only tells you whether two Python objects are the same, and every call to .storage() builds a new Python wrapper object, so it is not the right way to check this. Instead, check whether the storages start at the same memory location:
tensor_1.storage().data_ptr() == tensor_2.storage().data_ptr()
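Applied to the snippet above (a small sketch reusing the same tensor names), the data pointers match, which shows the slice shares the original storage even though the two .storage() calls return distinct Python wrapper objects:

import torch

tensor_1 = torch.ones(5, 5)
tensor_2 = tensor_1[:3, :3]

s1 = tensor_1.storage()
s2 = tensor_2.storage()

print(s1 is s2)                         # False: two distinct Python wrapper objects
print(s1.data_ptr() == s2.data_ptr())   # True: both point at the same underlying memory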

Note that advanced indexing is a bit special in this regard: depending on the arguments, you may or may not get a view.
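A small sketch of that difference, using the same data_ptr check as above: basic slicing returns a view, while indexing with a list of indices returns a copy:

import torch

t = torch.arange(25).reshape(5, 5)

basic = t[:3, :3]        # basic slicing: returns a view
advanced = t[[0, 1, 2]]  # advanced indexing with a list: returns a copy

print(basic.storage().data_ptr() == t.storage().data_ptr())     # True
print(advanced.storage().data_ptr() == t.storage().data_ptr())  # False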
