Advanced Slicing

I would like to slice a torch tensor in a pattern where the slice has a width that is not equal to one (essentially slicing off columns of a matrix), but applied to a 1D tensor.

>>> a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
>>> b = a.view(3, -1)[:, 4:]
>>> b
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
>>> b.view(-1, 6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

Given the rules of torch.Tensor.view this of course makes sense, since the result can't be expressed as a single uniform slice, but I was wondering whether there is any other way to perform such an operation without copying the original storage.
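For reference, a quick check (a sketch using data_ptr comparisons) confirms that b above is already a copy-free view into a's storage, and shows why a flat 1D view is impossible: the selected elements sit at storage indices 4, 5, 10, 11, 16, 17, and the gaps between them alternate between 1 and 5, so no single stride can describe them.

```python
import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
b = a.view(3, -1)[:, 4:]

# b shares a's storage: its data pointer is a's, offset by 4 elements.
assert b.data_ptr() == a.data_ptr() + 4 * a.element_size()

# The selected elements sit at storage indices 4, 5, 10, 11, 16, 17.
# Their gaps (1, 5, 1, 5, 1) are not constant, so no 1D view with a
# single uniform stride can cover exactly these elements.
idx = [4, 5, 10, 11, 16, 17]
gaps = [j - i for i, j in zip(idx, idx[1:])]
print(gaps)  # [1, 5, 1, 5, 1]
```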

Maybe looking into this would be useful: PyTorch unfold could be faster · Issue #60466 · pytorch/pytorch · GitHub
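The linked issue is about unfold's speed, but Tensor.unfold itself does return a view, so a sketch along these lines (assuming a window of size 2 taken every 6 elements, starting at offset 4) reproduces the 2D tensor b without a copy:

```python
import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)

# Slice off the first 4 elements (a view), then take windows of
# size 2 with step 6: this picks storage indices (4,5), (10,11), (16,17).
c = a[4:].unfold(0, 2, 6)
print(c)  # three rows of [1., 1.]

# Still a view into a's storage: no copy was made.
assert c.data_ptr() == a.data_ptr() + 4 * a.element_size()
```

Note this gives the same (3, 2) view as b, not the flattened 1D result.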

import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
b = a.view(3, -1)[:, 4:]

# Using as_strided
result = torch.as_strided(b, size=(1, 6), stride=(2, 1))

print(result)

Is this what you were looking for? I may be mistaken.

torch.as_strided actually applies the given strides to the original memory layout (starting from the input's storage offset), so the value that is output from this program is actually

tensor([[1., 1., 0., 0., 0., 0.]])

and not the vector of ones one might expect.
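To illustrate with a small sketch (same a and b as above): as_strided starts reading at the input's storage offset, so the call above just walks six consecutive storage elements starting at index 4. Spelling out the size, stride, and storage_offset explicitly reconstructs b itself, still without a copy:

```python
import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
b = a.view(3, -1)[:, 4:]

# as_strided inherits b's storage offset (4), so this reads storage
# indices 4..9: [1, 1, 0, 0, 0, 0] -- not the ones one might expect.
wrong = torch.as_strided(b, size=(1, 6), stride=(2, 1))
print(wrong)  # tensor([[1., 1., 0., 0., 0., 0.]])

# Spelling out offset and strides reconstructs b exactly (still a view).
same = torch.as_strided(a, size=(3, 2), stride=(6, 1), storage_offset=4)
assert torch.equal(same, b)
assert same.data_ptr() == b.data_ptr()
```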

Ah, my bad.

Making it contiguous solves that problem (but then you're making a copy). My understanding was a bit wrong, then. Unfortunately, I don't have a copy-free solution in that case. Even .reshape only returns a view when the tensor is contiguous (or the new shape is otherwise compatible with the strides); here it falls back to a copy.
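As a final check (a sketch with data_ptr comparisons), .reshape on the non-contiguous b does succeed where .view failed, but only by silently materializing a copy:

```python
import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
b = a.view(3, -1)[:, 4:]

assert not b.is_contiguous()

# reshape works where view raised an error, but it allocates new storage:
flat = b.reshape(-1)
print(flat)  # tensor([1., 1., 1., 1., 1., 1.])

# Different data pointer => a copy was made; writes to flat
# will not propagate back to a.
assert flat.data_ptr() != b.data_ptr()
```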