I have a dataset with images sized 650x1250, and I want to downsample them for use with a deep learning model. The images contain very small objects, and resizing them to 320x320 has resulted in the model not learning these small features effectively.
Can you suggest any other methods to achieve a 320x320 image size while preserving small details?
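For reference, my current preprocessing looks roughly like this (a minimal sketch; the exact `transforms` pipeline is an assumption, since I haven't posted my full code):

```python
# Rough sketch of the resize-based preprocessing described above.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((320, 320)),  # downsamples 650x1250 -> 320x320, shrinking the small objects
    transforms.ToTensor(),
])
```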
@ptrblck I couldn't understand how random cropping would work here, because for a UNet the image dimensions should be divisible by 32.
I am using the original images with dimensions 650x1250 and applied a random cropping transformation with probability 0.5, but the model is throwing an error.
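My augmentation looks roughly like this (a minimal sketch; the `RandomApply` wrapper around `RandomCrop` is my assumption about how the p=0.5 cropping is wired up, not my exact code):

```python
# Rough sketch of the random cropping with probability 0.5 described above.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomApply([transforms.RandomCrop(320)], p=0.5),
    transforms.ToTensor(),
])
# When the crop is skipped (the other 50% of the time), the image stays at
# 650x1250. Neither 650 nor 1250 is divisible by 32, so the UNet's
# downsampling/upsampling feature maps no longer line up, which I suspect
# is what triggers the error.
```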