Problem with image transformation

I have a problem transforming my images, which have dimensions of 320×160. I used the transformation below and it trains, but when I actually try to run the model it fails after the flattening layer. After testing a lot of solutions, it came to my mind that the transformation is probably the problem, especially since I don’t really know how to transform images correctly with PyTorch. This is my transformation:
transform_car = transforms.Compose([
    transforms.Resize([66, 200]),
    transforms.RandomCrop([62, 194]),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
Is there a way to fix it? Or how should I resize, crop, and randomly flip the images?

The transformation looks alright and will create an output tensor of shape [channels, 62, 194]. What’s the actual issue? Are you running into a shape mismatch? If so, check the flattened activation shape and make sure the next layer expects the same number of features (e.g. a linear layer).
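If it is a shape mismatch, one way to get the right in_features for the first linear layer is to push a dummy tensor with the transform’s output size through the convolutional part and read off the flattened size. A minimal sketch; the conv stack here is a hypothetical stand-in for your model’s feature extractor:

```python
import torch
import torch.nn as nn

# Hypothetical conv stack, standing in for your model's feature extractor.
features = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2),
    nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2),
    nn.ReLU(),
)

# Feed a dummy batch with the exact spatial size the transform produces
# ([channels, 62, 194] here) and inspect the flattened feature count.
dummy = torch.zeros(1, 3, 62, 194)
n_features = features(dummy).flatten(1).shape[1]
print(n_features)  # use this value as in_features of the first nn.Linear
```

The key point is that n_features changes whenever the input’s spatial size changes, which is exactly what a flatten-then-linear error is telling you.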

It trains fine, but when I try to run the model it says that mat1 and mat2 shapes cannot be multiplied (64x429 and 1088x100).

What does “run” mean here? If you are able to train the model, you are already executing the forward and backward passes.
Assuming the failure is raised during inference, check the input shapes and make sure they equal the training case. The error itself points at the mismatch: mat1 (64×429) is the flattened activation, while mat2 (1088×100) is the first linear layer’s weight, which expects 1088 input features, so the tensor reaching the flatten step has a different spatial size than during training.

The model is plugged into a simulator from which I collected the data, so when I try to run it on the sim it just gives this error.

Did you make sure to use the same transformation, or at least guarantee that the same spatial size is used?

Nope, I think that’s it. This is really my first big deep learning project and my first PyTorch one, so I think I made a lot of mistakes. I will definitely try that.