Use Python-style slice indexing along a given dimension

Is it possible to dynamically apply a Python-style slice(start, stop, step) to a given dimension? narrow doesn’t help here, since it doesn’t support a step. as_strided would likely work, but is there a more idiomatic way?

Hi,

I don’t think we have a function to do that. But you can use indexing like a[:, 0:4:2] to achieve this, or pass the slice object directly inside the [].
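For example, both forms below give the same result (an illustrative snippet with a small 2-D tensor):

import torch

a = torch.arange(12).reshape(3, 4)

b = a[:, 0:4:2]         # every second column, via literal slice syntax
s = slice(0, 4, 2)
c = a[:, s]             # same thing, passing the slice object directly

assert torch.equal(b, c)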
Will that work for you?

I’d like to support a dynamic dimension. My current workaround is:

def tensor_slice(tensor, dim, s):
    # Normalize a negative dimension index.
    if dim < 0:
        dim += tensor.dim()
    # Full slices for the leading dims, then the requested slice s.
    return tensor[(slice(None),) * dim + (s,)]
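For example, taking every second element along the last dimension:

import torch

x = torch.arange(24).reshape(2, 3, 4)
y = tensor_slice(x, -1, slice(0, 4, 2))

assert torch.equal(y, x[:, :, 0:4:2])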

I’m doing this to implement packbits: https://github.com/pytorch/pytorch/issues/32867

A question: if I use as_strided instead, can I control the shape of the result? Basically, I need to select every 8th bool element, then every (8th+1), …, every (8th+7), in order to pack them into one uint8 tensor.

I guess adding a step parameter to narrow would fix my use case.

You can do everything with as_strided 😛 But it might not be easy to read.
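For instance, a minimal sketch of the every-8th grouping described above (assuming a 1-D input whose length is a multiple of 8, and big-endian bit order like NumPy’s packbits):

import torch

n = 32
bits = torch.randint(0, 2, (n,), dtype=torch.uint8)  # stand-in for a bool tensor

# Row k of phases holds elements k, k+8, k+16, ...:
# size (8, n // 8), stride 1 between rows, stride 8 within a row.
phases = bits.as_strided((8, n // 8), (1, 8))

# Pack each group of 8 bits into one uint8, most significant bit first.
weights = (2 ** torch.arange(7, -1, -1)).view(8, 1)
packed = (phases * weights).sum(dim=0).to(torch.uint8)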

I guess adding a step parameter to narrow would fix my use case.

I do think that this is the way to go, yes, with the caveat that it would fail if the underlying Tensor cannot be viewed this way.
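A minimal sketch of what such a stepped narrow could look like in terms of as_strided (narrow_step is a hypothetical name, not an existing PyTorch function; length counts the selected elements):

import torch

def narrow_step(tensor, dim, start, length, step=1):
    # Hypothetical stepped narrow: select length elements along dim,
    # starting at start and advancing by step, returned as a view.
    size = list(tensor.size())
    stride = list(tensor.stride())
    offset = tensor.storage_offset() + start * stride[dim]
    size[dim] = length
    stride[dim] *= step
    return tensor.as_strided(size, stride, offset)

x = torch.arange(10)
assert torch.equal(narrow_step(x, 0, 1, 4, 2), x[1:9:2])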

You can check out my implementation of packbits: https://github.com/pytorch/pytorch/issues/32867#issuecomment-660230647

Seems to work. Of course, it’d be faster to implement this natively in C++.

I don’t; instead, I think that investigating why ATen’s slice isn’t visible in Python is probably the way to go…

Should I file a GitHub issue about what @tom suggests?

You might want to read up on the background. Personally, I don’t think we would do the same these days, with JIT .code giving slice, but what do I know. It’s discussed here and in the PR linked from there:

In the meantime, you can just call the JIT op: torch.ops.aten.slice(tensor, dim, start, stop, step).
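For example (an illustrative check that it matches regular slice indexing):

import torch

x = torch.arange(20).reshape(4, 5)

# Every second column along dim=1, via the ATen op.
y = torch.ops.aten.slice(x, 1, 0, 5, 2)

assert torch.equal(y, x[:, 0:5:2])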

Best regards

Thomas
