My model takes a fixed-size input image (128x128). However, my dataset contains images with variable widths and heights. I want to resize them to a fixed size, e.g., 128 x 128 for my model. What is the best practice in this case?
Directly using transforms.Resize() in this case squashes the output. I want to maintain the aspect ratio as much as I can (I know there will be a tradeoff). If that isn't possible, how can I fix this issue?
I don’t fully understand the question, but wouldn’t
transforms.Resize((128, 128)) just work?
Sorry for replying late.
The dataset contains images of different sizes, like 120x32, 189x78, 220x64, etc.
My model takes 128x128 images as input.
If I resize the image using
transforms.Resize((128, 128)), the resulting image is squeezed or stretched, and it does not keep the aspect ratio of the input image.
So what is the best solution in this case? Should I pad the images, similar to a jittering technique, for this particular scenario?
Thanks for the explanation. I thought the issue was a functional one, i.e. you were stuck at the actual resizing operation. I don't know which approach (resizing, padding, resizing and cropping, etc.) would work best for your use case, so you would need to compare these approaches.
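In case it helps with the comparison, here is a small sketch of the padding variant ("letterboxing"): scale the image so its longer side matches the target, then pad the shorter side symmetrically. The helper below only computes the geometry in plain Python (the function name `letterbox_geometry` is my own); the actual resize and pad could then be done with e.g. transforms.Resize and transforms.Pad.

```python
def letterbox_geometry(w, h, target=128):
    """Return the resized (w, h) and the (left, top, right, bottom) padding
    needed to fit a w x h image into a target x target square while
    keeping the aspect ratio."""
    scale = target / max(w, h)                    # longer side -> target
    new_w, new_h = round(w * scale), round(h * scale)
    pad_w, pad_h = target - new_w, target - new_h
    left, top = pad_w // 2, pad_h // 2            # split padding evenly
    return (new_w, new_h), (left, top, pad_w - left, pad_h - top)

# With one of the sizes you listed:
size, pad = letterbox_geometry(220, 64)
# 220x64 scales to 128x37; the remaining 91 rows are padded 45 on top, 46 below.
```

Note that padding adds "dead" pixels (usually zeros) the model still has to process, which is the tradeoff versus squashing; which one hurts accuracy less depends on your data.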
Alright, thank you. So, in general, there is no standard approach for this case?