I have already asked a similar question, but I’ll try to reformulate it.
My question now is: how do I reorder PyTorch tensor data into Fortran-style (column-major) order, like np.reshape(order='F') in numpy?
I need column-major ordering to use CUDA functions from the cuSPARSE library.
rdroste (Richard), May 7, 2018, 6:42pm
How about this?
t = torch.Tensor(2,3)
f = t.numpy().reshape(2,3,order='F')
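One thing to note: reshaping to the *same* shape with order='F' reads and writes the elements in the same column-major sequence, so it changes neither the values nor the memory layout. To actually get column-major memory on the numpy side, np.asfortranarray is the more direct call (a sketch of mine, assuming the tensor lives on the CPU):

```python
import numpy as np
import torch

t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
a = t.numpy()                    # shares memory with t, C-contiguous

# Identity-shape reshape with order='F' reads and writes elements in the
# same column-major sequence, so the values come back unchanged:
same = a.reshape(2, 3, order='F')
print(np.array_equal(same, a))   # True

# asfortranarray copies the data into column-major (Fortran-order) memory:
f = np.asfortranarray(a)
print(f.flags['F_CONTIGUOUS'])   # True
print(np.array_equal(f, a))      # True: same values, different layout
```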
@rdroste
First of all, this operation is not differentiable.
Secondly, it requires transferring the data from GPU to CPU and back, which is very slow.
It’s a few more lines in PyTorch, but this will work if your input is C-style contiguous:
t = torch.randn(3, 5)
t = t.t().contiguous().t()
print(t.shape) # torch.Size([3, 5])
print(t.stride()) # (1, 3)
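A quick way to confirm the result is to compare the strides against the expected column-major pattern. The helper below is my own illustration (is_column_major is not a torch function):

```python
import torch

def is_column_major(t: torch.Tensor) -> bool:
    # A 2-D tensor is column-major (Fortran-contiguous) when the first
    # dimension is the fastest-varying one: strides (1, rows).
    return t.dim() == 2 and t.stride() == (1, t.size(0))

t = torch.randn(3, 5)
print(is_column_major(t))   # False: fresh tensors are row-major
t = t.t().contiguous().t()  # rewrite the data in column-major order
print(is_column_major(t))   # True
print(t.shape, t.stride())
```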
rdroste (Richard), May 7, 2018, 7:59pm
@spaceinvader You are right, the numpy solution will be slow and non-differentiable.
@colesbury How about reversing the strides manually to perform fewer operations, like this:
t = torch.randn(3, 5)
t.set_(t.storage(), t.storage_offset(), t.size(), tuple(reversed(t.stride())))
This should have a similar effect to t = t.t().contiguous().t()
but is potentially faster, since it avoids copying the data. (Caveat: reversing the strides in place reinterprets the existing storage rather than reordering it, so the tensor's values read as the transpose, and for non-square shapes the reversed strides can index past the end of the storage.)
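To make the difference between the two approaches concrete, here is a sketch of my own comparing them on a square tensor (using as_strided instead of set_ for brevity; square so the reversed strides stay in bounds):

```python
import torch

t = torch.arange(16, dtype=torch.float32).reshape(4, 4)

# Approach 1: actually rewrite the data in column-major order (makes a copy).
copied = t.t().contiguous().t()

# Approach 2: reinterpret the same storage with reversed strides (no copy).
# Element [i, j] now reads storage[i + 4*j], i.e. the transpose's values.
reinterpreted = t.as_strided(t.size(), tuple(reversed(t.stride())))

print(torch.equal(copied, t))             # True: same values, new layout
print(copied.stride())                    # (1, 4): column-major strides
print(torch.equal(reinterpreted, t.t()))  # True: values are transposed
```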
How about this?
vp = torch.randn(1, 204)
vp1 = vp.view(68, -1)  # torch.Size([68, 3])
vp2 = vp1.t()          # torch.Size([3, 68])
then vp2 is the F-order version of vp.
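This works because a C-order view into the reversed shape followed by a transpose is exactly an F-order reshape of the flat data. A quick cross-check against numpy (a sketch; the variable names are mine):

```python
import numpy as np
import torch

vp = torch.arange(204, dtype=torch.float32).reshape(1, 204)
vp2 = vp.view(68, -1).t()   # shape (3, 68), strides (1, 3)

# numpy's reference result for an F-order reshape of the same flat data
ref = np.reshape(vp.numpy().ravel(), (3, 68), order='F')
print(np.array_equal(vp2.numpy(), ref))   # True
```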