Resize sanity check

I’m new to PyTorch, coming from TF/Keras, and I’m working through some basic examples: loading pre-trained models and generating predictions for image classification problems.

To do this I’ve been using the transforms.Resize() function.
For instance, I’m using an Inception model that expects a (299, 299, 3) shaped input.
I’ve seen a lot of places suggesting that transforms.Resize(299) and transforms.Resize((299, 299)) should give the same result, but that is not what my tests show.

Can someone clarify whether this is expected behavior? As another sanity check I resized to (299, 300), which I expected to raise an error, yet the network still makes a prediction. How is this possible?
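Roughly what I’m testing (a minimal sketch — the image path and the printed shapes are placeholders for my actual setup):

```python
from PIL import Image
from torchvision import transforms

img = Image.open("example.jpg")  # placeholder: any RGB test image

resize_int   = transforms.Compose([transforms.Resize(299), transforms.ToTensor()])
resize_tuple = transforms.Compose([transforms.Resize((299, 299)), transforms.ToTensor()])
resize_off   = transforms.Compose([transforms.Resize((299, 300)), transforms.ToTensor()])

print(resize_int(img).shape)    # e.g. torch.Size([3, 299, 448]) for a landscape image
print(resize_tuple(img).shape)  # torch.Size([3, 299, 299])
print(resize_off(img).shape)    # torch.Size([3, 299, 300]) -- the model still returns a prediction
```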

Thank you in advance for any help.

Yeah, the Stack Overflow explanation is wrong. In general, there aren’t many places to get better PyTorch advice than these forums.

The TorchVision documentation explains:

  • If size is a sequence like (h, w), the output size will be matched to this.
  • If size is an int, the smaller edge of the image will be matched to this number, i.e., if height > width, the image will be rescaled to (size * height / width, size).

The idea of the latter is to preserve the aspect ratio of the image and then follow with a cropping operation to reach the final size.
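For example, a minimal sketch of that resize-then-crop pattern (the 299 here just matches your Inception input size; add whatever normalization your model expects):

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(299),      # shorter edge -> 299, aspect ratio preserved
    transforms.CenterCrop(299),  # crop the longer edge down to 299
    transforms.ToTensor(),
])
# preprocess(img) has shape [3, 299, 299] regardless of the input aspect ratio.
```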

Best regards

Thomas
