# Difference between ":" and "..." in tensor slicing

I am new to PyTorch. Can someone explain the difference between `:` and `...` in tensor slicing? For instance,

```python
import torch

tensor = torch.rand(3, 4)
print(f"Last column: {tensor[:, -1]}")
print(f"Last column: {tensor[..., -1]}")
```

outputs the same result.

It seems that in this context `:` and `...` yield the same result. However, when used to add a new dimension to a tensor, they give different results:

```python
>>> M = torch.randn(3, 4)
>>> M
tensor([[-1.1915,  1.3102,  2.4116, -0.6022],
        [-1.4025, -0.0796, -0.6342, -1.5389],
        [-0.3399, -0.1049,  0.0192,  0.0186]])

>>> # grab the last column
>>> M[..., -1]
tensor([-0.6022, -1.5389,  0.0186])
>>> M[:, -1]
tensor([-0.6022, -1.5389,  0.0186])

>>> # shape torch.Size([3, 1, 4])
>>> M[:, None]
tensor([[[-1.1915,  1.3102,  2.4116, -0.6022]],

        [[-1.4025, -0.0796, -0.6342, -1.5389]],

        [[-0.3399, -0.1049,  0.0192,  0.0186]]])

>>> # shape torch.Size([3, 4, 1])
>>> M[..., None]
tensor([[[-1.1915],
         [ 1.3102],
         [ 2.4116],
         [-0.6022]],

        [[-1.4025],
         [-0.0796],
         [-0.6342],
         [-1.5389]],

        [[-0.3399],
         [-0.1049],
         [ 0.0192],
         [ 0.0186]]])
```

So in your examples they are the same, but in other cases they give different behaviour.
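The reason is that `...` expands to cover *every* dimension not explicitly indexed, while a single `:` covers exactly one. On a 2-D tensor the two happen to coincide, but on a 3-D tensor they index different dimensions. A minimal sketch:

```python
import torch

t = torch.zeros(2, 3, 4)

# `:` covers only the first dimension, so -1 indexes dimension 1
print(t[:, -1].shape)    # torch.Size([2, 4])

# `...` covers all leading dimensions, so -1 indexes the last dimension
print(t[..., -1].shape)  # torch.Size([2, 3])
```

The same logic explains the `None` examples above: `M[:, None]` inserts the new axis at position 1, while `M[..., None]` inserts it after all existing dimensions.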

It's the same as NumPy slicing: use colons (`:`) when you have multiple dimensions you need to slice differently, e.g. `tensor[:-1, 2:-1, :]`, and an ellipsis (`...`) when all following (or preceding) dimensions should be kept whole. For example, a tensor of shape `(8, 3, 256)` can be sliced as:

- `tensor[0:1, ...]` or equivalently `tensor[0:1, :, :]`
- `tensor[..., -1]` or equivalently `tensor[:, :, -1]`
- `tensor[0:1, 0, ...]` or equivalently `tensor[0:1, 0, :]`

and so on.
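These equivalences are easy to check directly. A quick sketch verifying them on a tensor of the shape mentioned above:

```python
import torch

t = torch.rand(8, 3, 256)

# each pair selects exactly the same elements
assert torch.equal(t[0:1, ...], t[0:1, :, :])
assert torch.equal(t[..., -1], t[:, :, -1])
assert torch.equal(t[0:1, 0, ...], t[0:1, 0, :])
print("all equivalent")
```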