# PyTorch indexing

Given a torch tensor of some shape, say `[1, 1, 10, 11, 12]`, and another torch tensor created as:

```
rng = torch.tensor([9, 10, 11])
```

Now, I want to use this range over the last 3 dimensions, i.e. something that statically would be equivalent to:

```
x = torch.rand(1, 1, 10, 11, 12)
rng = torch.tensor([9, 10, 11])

# statically I want something like:
x[:, :, :rng[0], :rng[1], :rng[2]]
```

However, the number of dimensions of the input tensor is dynamic, so I wonder whether there is a way to do this kind of indexing dynamically in torch. Otherwise, I basically have to hard-code the input dimensions, which is ugly and difficult to maintain.

You could use `x[..., :rng[0], :rng[1], :rng[2]]` to index the last dimensions.
Let me know if that works for you.
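For reference, a quick check that the `Ellipsis` form selects the same sub-tensor as the fully spelled-out version (using the 5-D example tensor from above):

```python
import torch

x = torch.rand(1, 1, 10, 11, 12)
rng = torch.tensor([9, 10, 11])

# Ellipsis stands in for all leading dimensions, so both forms
# slice only the trailing three dimensions.
a = x[:, :, :rng[0], :rng[1], :rng[2]]
b = x[..., :rng[0], :rng[1], :rng[2]]

assert a.shape == (1, 1, 9, 10, 11)
assert torch.equal(a, b)
```

0-dim integer tensors such as `rng[0]` are valid slice bounds because they implement `__index__`.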

The problem is that `rng` itself is dynamic: it can hold 1, 2, 3, or 4 values (one per trailing dimension to slice). If I do this, I have to hard-code the dimensions, i.e. something like:

```
if len(rng) == 1:
    ...
elif len(rng) == 2:
    ...
```

Is there a way to do this indexing directly with the tensor object?
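One way to avoid the hard-coded branches is to build a tuple of `slice` objects from `rng` and combine it with `Ellipsis`, so the leading dimensions are left untouched no matter how many there are. A minimal sketch (the helper name `slice_last_dims` is made up for illustration):

```python
import torch

def slice_last_dims(x, rng):
    # One slice(0, n) per entry in rng, applied to the trailing
    # dimensions; Ellipsis covers all leading dimensions.
    slices = tuple(slice(int(n)) for n in rng)
    return x[(Ellipsis, *slices)]

x = torch.rand(1, 1, 10, 11, 12)

# Works for any number of trailing dimensions:
assert slice_last_dims(x, torch.tensor([9, 10, 11])).shape == (1, 1, 9, 10, 11)
assert slice_last_dims(x, torch.tensor([10, 11])).shape == (1, 1, 10, 10, 11)
```

Indexing with a tuple is exactly what the literal `x[..., :a, :b]` syntax desugars to, so this stays a plain view with no copies.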

I misunderstood this part:

as `x` would be the input tensor.

Note that `rng` is a 1-dimensional tensor with 3 values.
If you want to index the last 3 values, you could use negative indices:

```
x[:, :, :rng[-3], :rng[-2], :rng[-1]]
```
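A quick check of the negative-index form (for a 1-D `rng` with 3 values, `rng[-3]`, `rng[-2]`, `rng[-1]` are the same entries as `rng[0]`, `rng[1]`, `rng[2]`):

```python
import torch

x = torch.rand(1, 1, 10, 11, 12)
rng = torch.tensor([9, 10, 11])

# Negative indices count from the end of rng, so this is equivalent
# to x[:, :, :rng[0], :rng[1], :rng[2]] when len(rng) == 3.
a = x[:, :, :rng[-3], :rng[-2], :rng[-1]]

assert a.shape == (1, 1, 9, 10, 11)
```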