Using negative indices on a Tensor along any dimension other than the first appears to circularly shift the entries of the resulting slice by one.

For example:

```
import torch
import numpy as np
A = np.arange(15).reshape(3,5)
B = torch.Tensor(A)
idx = [-1,0,1]
```

Taking slices along the first dimension then gives the same result as NumPy:

```
A[idx,:]
Out:
array([[10, 11, 12, 13, 14],
       [ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9]])
B[idx,:]
Out:
tensor([[10., 11., 12., 13., 14.],
        [ 0.,  1.,  2.,  3.,  4.],
        [ 5.,  6.,  7.,  8.,  9.]])
```

but if you take slices along the next dimension, the slice with the negative index gets circularly shifted by one entry:

```
A[:,idx]
Out:
array([[ 4,  0,  1],
       [ 9,  5,  6],
       [14, 10, 11]])
B[:,idx]
Out:
tensor([[14.,  0.,  1.],
        [ 4.,  5.,  6.],
        [ 9., 10., 11.]])
```
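In the meantime, converting the negative indices to their non-negative equivalents before indexing seems to sidestep the problem. A minimal sketch (`pos_idx` is just a name I'm introducing here):

```python
import numpy as np
import torch

A = np.arange(15).reshape(3, 5)
B = torch.Tensor(A)
idx = [-1, 0, 1]

# Wrap each negative index around the size of the dimension being
# indexed (dimension 1 here, size 5), so -1 becomes 4, etc.
pos_idx = [i % B.size(1) for i in idx]  # [4, 0, 1]

# Indexing with only non-negative indices matches the NumPy result.
print(B[:, pos_idx])
```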

Is this intentional? I couldn't find much documentation of Tensor indexing, and the 60-minute blitz tutorial claims indexing should work the same as in NumPy.