Is the following expected behaviour? (pytorch 1.4)
a = torch.tensor([[[1,2],[3,4]]]) # a has shape (1, 2, 2)
a[0] = a[0].t() # in-place assignment of the transposed slice
print(a) # returns [[[1,3],[3,4]]]
NumPy returns [[[1,3],[2,4]]] in the equivalent scenario.
This kind of assignment is not very PyTorch-like (not backprop'able, I guess), but I'm still curious about what's happening under the hood. It's as if the RHS evaluation and the LHS assignment happen one element at a time, with no buffering of the right-hand side.
This is a minimal example; by contrast,
b = torch.tensor([[1,2],[3,4]]) # plain 2d tensor
b = b.t() # returns [[1,3],[2,4]]
matches intuition (as does doing a.clone().t()).
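The element-at-a-time hypothesis can be sketched in plain Python (no torch; this is a guess at the mechanism, not PyTorch's actual implementation). An unbuffered in-place transpose overwrites a[0][1] before a[1][0] is copied from it, which reproduces the [[1,3],[3,4]] corruption; materialising the transpose first reproduces the NumPy result.

```python
# Unbuffered, element-at-a-time overlapping copy: the write into
# a[i][j] happens before a[j][i] is read on a later iteration.
a = [[1, 2], [3, 4]]
for i in range(2):
    for j in range(2):
        a[i][j] = a[j][i]
print(a)  # [[1, 3], [3, 4]] -- same corruption as the tensor case

# Buffering the RHS first (materialising the transpose into fresh
# memory, as clone() would) removes the overlap.
b = [[1, 2], [3, 4]]
t = [[b[j][i] for j in range(2)] for i in range(2)]  # full transpose copy
for i in range(2):
    for j in range(2):
        b[i][j] = t[i][j]
print(b)  # [[1, 3], [2, 4]] -- matches NumPy's result
```

In the unbuffered loop, a[0][1] becomes 3 at step (i=0, j=1), so step (i=1, j=0) reads the already-overwritten 3 instead of the original 2.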