I would like to slice a 1D torch tensor in a repeating pattern where each slice is wider than one element (essentially the equivalent of slicing off columns of a matrix).

>>> a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
>>> b = a.view(3, -1)[:, 4:]
>>> b
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
>>> b.view(-1, 6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

Given the rules of torch.Tensor.view this of course makes sense, since the result can't be expressed as a uniform slice, but I was wondering whether there is any other way to perform such an operation without copying the original storage.
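To make the constraint concrete: the 2D slice `b` is still a single strided view of `a`'s storage (shape `(3, 2)`, strides `(6, 1)`, offset 4), which you can reproduce with `torch.as_strided`. A flattened 1D version would need to hit storage elements 4, 5, 10, 11, 16, 17 — offsets that alternate between +1 and +5, which no single stride can describe, so no view exists for it. A small sketch of the check:

```python
import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
b = a.view(3, -1)[:, 4:]

# b is still a strided view over a's storage:
# shape (3, 2), strides (6, 1), starting at element 4
c = torch.as_strided(a, size=(3, 2), stride=(6, 1), storage_offset=4)
print(torch.equal(b, c))  # same elements, same layout

# Because c is a view, writes to a show through it
a[4] = 9.0
print(c[0, 0])
```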

Making the tensor contiguous resolves the error, but then you are making a copy, which defeats the purpose. So it seems this cannot be done as a pure view: the flattened layout cannot be described by a single stride over the original storage, so any route back to 1D ends up copying. One correction to a common misconception: .reshape does accept non-contiguous input — it just silently falls back to a copy when a view is impossible.
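To illustrate the copying behaviour, here is a small sketch: .reshape succeeds exactly where .view raised, and comparing data pointers shows the result lives in fresh storage rather than aliasing the original tensor.

```python
import torch

a = torch.tensor([0.0, 0.0, 0.0, 0.0, 1.0, 1.0] * 3)
b = a.view(3, -1)[:, 4:]

flat = b.reshape(-1, 6)  # succeeds where .view() raised
print(flat)

# The result no longer shares storage with b:
print(flat.data_ptr() == b.data_ptr())

# ...so later writes to a are not reflected in flat
a[4] = 9.0
print(flat[0, 0])
```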