I’m new to PyTorch, coming from TF/Keras, and I’m working through some basic examples: loading pre-trained models and generating predictions for image classification problems. For resizing the inputs I’ve been using torchvision’s transforms.Resize().
For instance, I’m using an Inception model that expects a (299,299,3) shaped input.
Many examples I’ve seen make it look as if transforms.Resize(299) and transforms.Resize((299, 299)) should give the same result, but from my tests that is not the case.
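Here is a minimal sketch of the comparison I ran, with a random non-square tensor standing in for a real image:

```python
import torch
from torchvision import transforms

# Non-square dummy image in (C, H, W) layout, to expose any difference.
img = torch.rand(3, 400, 600)

out_int = transforms.Resize(299)(img)          # int argument
out_pair = transforms.Resize((299, 299))(img)  # (h, w) tuple argument

print(out_int.shape)   # e.g. torch.Size([3, 299, 448]) -- not square!
print(out_pair.shape)  # torch.Size([3, 299, 299])
```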
Can someone clarify whether this is expected behavior? As another sanity check, I resized the input to (299, 300), which I expected to raise an error, yet the network still produces a prediction. How is this possible?
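For reference, here is roughly what that sanity check looked like (a rough sketch with a random tensor instead of a real image; I’m on a recent torchvision, so I load the weights with the weights= API rather than the older pretrained=True):

```python
import torch
from torchvision import models, transforms

# Pretrained Inception v3, in eval mode so forward() returns plain logits.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

# Deliberately "wrong" size: 299x300 instead of the expected 299x299.
img = torch.rand(3, 500, 600)
x = transforms.Resize((299, 300))(img).unsqueeze(0)  # batch of 1

with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 1000]) -- a prediction, no error raised
```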
Thank you in advance for any help.