I have a dataset of large images, i.e. the parameters I pass to transforms.Resize or transforms.RandomResizedCrop are smaller than the image dimensions, and it works fine. I have now added small images to the dataset (i.e. smaller than those parameters). How are they handled by the transforms? Also, what is the best way to handle them?
I would start with the TorchVision documentation of Resize. It explains:
- If size is a sequence like (h, w), output size will be matched to this.
- If size is an int, the smaller edge of the image will be matched to this number, keeping aspect ratio. I.e., if height > width, then the image will be rescaled to (size * height / width, size).
So the small images are upscaled first, and the crop then does its thing on the (upscaled) image.