How can I effectively take slices of a tensor?

I’ve noticed that if I take a slice of a big tensor with the standard numpy syntax (a = b[0:2]) and then save it to a file with pickle, the files for the two tensors are the same size.
What is actually happening?
I found that I can make the size reduction take effect by converting the newly created tensor to a numpy array and then back to a tensor, but I’m sure there’s an easier way.
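
For reference, this is roughly what I’m doing (file names are just placeholders):

import pickle
import torch

b = torch.randn(1000, 1000)
a = b[0:2]  # slice of the big tensor

with open('b.pkl', 'wb') as f:
    pickle.dump(b, f)
with open('a.pkl', 'wb') as f:
    pickle.dump(a, f)
# both files end up roughly the same size (~4 MB for float32)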

a and b will share the same memory, but some attributes of a, such as its size and possibly its storage_offset, will differ. Here is a small example:

import torch

b = torch.zeros(3, 2)
a = b[1:3]  # a is a view into b's storage, not a copy
print(b)
> tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])
a[:] = 1.  # writing through the view also modifies b
print(b)
> tensor([[0., 0.],
        [1., 1.],
        [1., 1.]])
print(a.storage_offset())  # a starts 2 elements into b's storage
> 2

Note that I sliced b[1:3] instead of b[0:2] so that the storage_offset is non-zero.
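
If you want to convince yourself that the memory really is shared, you can compare the raw pointers (a small check using data_ptr() and element_size()):

# a's first element lies storage_offset() elements into b's buffer
print(a.data_ptr() == b.data_ptr() + a.storage_offset() * a.element_size())
> True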
This sharing also explains the file sizes you saw: pickle serializes a tensor's entire underlying storage, not just the view. If you want an independent copy, use a = b[0:2].clone(), which allocates new storage containing only the sliced elements, so saving it produces a correspondingly smaller file.
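
To make the size difference concrete, here is a small sketch (file names are just placeholders):

import os
import pickle
import torch

b = torch.zeros(1000, 1000)
with open('view.pkl', 'wb') as f:
    pickle.dump(b[0:2], f)          # a view: the whole storage gets pickled
with open('clone.pkl', 'wb') as f:
    pickle.dump(b[0:2].clone(), f)  # a copy: only the two rows get pickled

print(os.path.getsize('view.pkl') > os.path.getsize('clone.pkl'))
> True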

Thank you, now it’s clear!
