Which method is best for Image Downsampling?

I have a dataset with images sized 650x1250, and I want to downsample them for use with a deep learning model. The images contain very small objects, and resizing them to 320x320 has resulted in the model not learning these small features effectively.

Can you suggest any other methods to achieve a 320x320 image size while preserving small details?

What do you suggest, @ptrblck?

Would random crops work for your use case? This would keep the original resolution while processing smaller parts of the image.

@ptrblck I don’t understand how random cropping would work, because for a UNet the image dimensions should be divisible by 32.
I am using the original images with dimensions 650x1250 and applied a random cropping transformation with probability 0.5, but the model is giving an error.

I don’t know what kind of error you are getting, but the random crop size can be set to a multiple of 32.
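As a minimal sketch, assuming torchvision’s `RandomCrop` and a standard (C, H, W) tensor input; the 320x320 crop size is just one choice of a multiple of 32 that fits inside a 650x1250 image:

```python
import torch
from torchvision import transforms

# Crop size set to a multiple of 32 (320 = 32 * 10) so a UNet-style
# model with several downsampling stages accepts the input shape.
crop = transforms.RandomCrop((320, 320))

x = torch.rand(3, 650, 1250)   # dummy image tensor (C, H, W)
patch = crop(x)
print(patch.shape)             # torch.Size([3, 320, 320])
```

Applying the crop to every sample (rather than with probability 0.5) also keeps all batch elements at the same, divisible-by-32 size.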