Does my model see a rotated image?

I have an input image with shape (1, 3, 392, 196), but after calling transforms.ToTensor() the shape becomes (1, 3, 196, 392): the width and height are swapped. I want to know whether my model is seeing a rotated version of the input image, since this is crucial for my object detection task.

Could you post a reproducible code snippet, please?

Thank you @ptrblck. I figured it out. I had a misconception about how the image would look after swapping H and W. I later realized that it is correct: nothing changes about how the model sees the image; it is only a matter of how arrays are represented. An image with height H and width W, when stored as an array, consists of H sub-arrays of length W, one per row. So when Torch reports the dimensions, it counts H sub-arrays of width W, which is why the shape is reported as [H, W].
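To make this concrete, here is a minimal sketch (using NumPy only, with hypothetical H and W values matching the shapes above) showing that the [H, W] reporting convention is just axis order, not a rotation. Note that PIL's Image.size reports (width, height), while array shapes are (H, W), which is a common source of this confusion:

```python
import numpy as np

# An image with height H = 196 and width W = 392: H rows, each of width W.
H, W = 196, 392
img = np.zeros((H, W, 3), dtype=np.uint8)  # HWC layout, as PIL/NumPy use
print(img.shape)  # (196, 392, 3)

# torchvision's ToTensor produces channels-first (C, H, W) = (3, 196, 392).
# Moving channels first is just a transpose of the axes; every pixel keeps
# its row and column, so nothing is rotated:
chw = np.transpose(img, (2, 0, 1))
print(chw.shape)  # (3, 196, 392)
```

The shapes differ only in axis order; the pixel grid itself is untouched.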
Thank you once again for your assistance.