How to get the transposed data buffer in PyTorch C++

I want to get the data buffer after transposing a tensor; however, transpose() does not really transpose the tensor (the data in the buffer is still in the original sequence).

I use the contiguous() method to get the transposed data. For a 2D tensor it works well, but for a higher-dimensional (e.g. 3D) tensor this method does not work. So how can I get the transposed data?

// e.g. b is a 3D tensor; how can I get the transposed data?
auto a = b.transpose(1, 2); 

transpose should work on a 3D tensor, as seen here:

import torch

x = torch.randn(2, 2, 3)
x.transpose(1, 2)
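
For reference, the equivalent call in the C++ frontend looks like this (a minimal sketch; the variable names are just illustrative):

#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::randn({2, 2, 3});
  auto y = x.transpose(1, 2);           // swaps dims 1 and 2 without copying data
  std::cout << y.sizes() << std::endl;  // [2, 3, 2]
}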

Could you describe the issue a bit more, please?

Thanks for your feedback. transpose does work, but I want to get the transposed data in C++, e.g. I have a tensor as below.

# python
x = torch.tensor([[[1, 2, 3], [4, 5, 6]]])

// On the PyTorch C++ side (I want to add a new op in C++), I want to
// get the data after the transpose. The data of x in memory is laid
// out as: {1, 2, 3, 4, 5, 6}

auto x_t = x.transpose(1, 2);

// However, after the transpose the data of x_t is still {1, 2, 3, 4, 5, 6};
// anyway, that is the expected result given how tensor views work.
x_t = x_t.contiguous();

// After contiguous() I get the transposed data {1, 4, 2, 5, 3, 6} for a 2D tensor,
// but it does not work for a 3D tensor. What should I do?
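
Concretely, this is a minimal sketch of the flow I am trying (here on CPU for simplicity; reading the buffer through data_ptr() is my own approach and the names are illustrative):

#include <torch/torch.h>
#include <iostream>

int main() {
  // Same example tensor: values 1..6 with shape (1, 2, 3).
  auto x = torch::arange(1, 7, torch::dtype(torch::kInt64)).reshape({1, 2, 3});

  auto x_t = x.transpose(1, 2);  // shape (1, 3, 2), storage untouched
  auto x_c = x_t.contiguous();   // copies into the transposed layout

  // I expect the raw buffer to now read {1, 4, 2, 5, 3, 6}.
  const auto* data = x_c.data_ptr<int64_t>();
  for (int64_t i = 0; i < x_c.numel(); ++i) {
    std::cout << data[i] << " ";
  }
  std::cout << std::endl;
}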

I’m still unsure what’s failing, as calling contiguous() on a 3D tensor should also work as seen here:

import torch

x = torch.arange(2*2*3).view(2, 2, 3)
print(x)
# tensor([[[ 0,  1,  2],
#          [ 3,  4,  5]],

#         [[ 6,  7,  8],
#          [ 9, 10, 11]]])
print(x.shape, x.stride())
# torch.Size([2, 2, 3]) (6, 3, 1)
print(x.is_contiguous())
# True

y = x.transpose(1, 2)
print(y)
# tensor([[[ 0,  3],
#          [ 1,  4],
#          [ 2,  5]],

#         [[ 6,  9],
#          [ 7, 10],
#          [ 8, 11]]])
print(y.shape, y.stride())
# torch.Size([2, 3, 2]) (6, 1, 3)
print(y.is_contiguous())
# False
y = y.contiguous()
print(y.shape, y.stride())
# torch.Size([2, 3, 2]) (6, 2, 1)
print(y.is_contiguous())
# True

The transpose will manipulate the metadata of the tensor, as seen in the shape and stride, and the contiguous() call will then copy the data into a memory-contiguous layout again.
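
The same checks are available from C++; here is a minimal sketch mirroring the Python snippet above (sizes(), strides(), and is_contiguous() are the C++ counterparts):

#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::arange(2 * 2 * 3).view({2, 2, 3});
  std::cout << x.sizes() << " " << x.strides() << " "
            << x.is_contiguous() << std::endl;  // [2, 2, 3] [6, 3, 1] 1

  auto y = x.transpose(1, 2);  // metadata-only change, no copy
  std::cout << y.sizes() << " " << y.strides() << " "
            << y.is_contiguous() << std::endl;  // [2, 3, 2] [6, 1, 3] 0

  y = y.contiguous();          // real copy into a dense layout
  std::cout << y.sizes() << " " << y.strides() << " "
            << y.is_contiguous() << std::endl;  // [2, 3, 2] [6, 2, 1] 1
}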

Hi ptrblck, thanks for your attention.
Yes, you are right. The issue was caused by a missing synchronization after contiguous(), since my data is on CUDA. Thanks again.
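
For anyone hitting the same thing, here is a sketch of the pattern that fixed it for me (this assumes a CUDA-enabled libtorch build; the function name is just illustrative):

#include <torch/torch.h>

// Make sure the transposed buffer is ready before reading it, when x is a CUDA tensor.
void read_transposed(const torch::Tensor& x) {
  auto x_c = x.transpose(1, 2).contiguous();  // the copy kernel is queued asynchronously
  torch::cuda::synchronize();                 // block until the copy has finished
  // Only now is it safe to read x_c.data_ptr<float>() from outside the stream,
  // e.g. with a cudaMemcpy into host memory.
}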