Indeed, it can be a shortcut to use
tensor.transpose_(0, 1)
instead of
tensor = tensor.transpose(0, 1)
But note that the performance difference is not significant: transpose neither copies nor allocates new memory; it only swaps the strides and returns a view of the same underlying data.
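A minimal sketch illustrating this (using a small torch.arange tensor purely for demonstration): checking the strides and the underlying data pointer shows that the out-of-place transpose is just a view, and that transpose_ swaps the strides of the tensor itself.

```python
import torch

t = torch.arange(6).reshape(2, 3)

# Out-of-place transpose returns a view: same storage, swapped strides.
view = t.transpose(0, 1)
print(t.stride(), view.stride())        # (3, 1) (1, 3)
print(t.data_ptr() == view.data_ptr())  # True -- no memory was copied

# In-place transpose_ swaps the strides of the tensor itself,
# avoiding only the Python-level reassignment.
t2 = torch.arange(6).reshape(2, 3)
t2.transpose_(0, 1)
print(t2.shape, t2.stride())            # torch.Size([3, 2]) (1, 3)
```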