Input image size is smaller than the model requires

I am using the Inception model, whose required input size is [299, 299, 3], but my image size is [300, 199]. If I use transforms.Scale(299), how will that affect the result?

I generally use

from PIL import Image, ImageOps

# Resize and crop in one step so the output is exactly 299x299
img = Image.open("try.jpg")
img = ImageOps.fit(img, (299, 299), Image.ANTIALIAS)
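For reference, transforms.Scale(299) (renamed transforms.Resize in newer torchvision) only resizes the shorter side of the image to 299 while keeping the aspect ratio, so a 300x199 image comes out at roughly 450x299 rather than 299x299. A rough torchvision equivalent of the ImageOps.fit call above pairs the resize with a center crop; this is just a sketch assuming a recent torchvision:

import torchvision.transforms as transforms

preprocess = transforms.Compose([
    transforms.Resize(299),      # shorter side -> 299, aspect ratio preserved
    transforms.CenterCrop(299),  # crop the longer side down to 299
    transforms.ToTensor(),       # PIL image -> CHW float tensor in [0, 1]
])

ImageOps.fit does the resize and crop in a single call, which is why it hands the model exactly 299x299.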