Different sizes of train-test input - CNN

I want to use the “DeepLabV3 ResNet50” model on a medical image segmentation task. I trained the model with images of size 112x112. At test time the model works fine on images of the same size, but when I resize the images to, let’s say, 224x224, the output shows that no part of the input image corresponds to the desired segment. I don’t get any errors about the input size, though. I have read that FCNs (fully convolutional networks) can accept inputs of different sizes because they have no fully connected layers, and I have checked that my model has none.

Do CNNs in general “learn” to identify the target objects only at the size they were trained on?
Does it depend on the pixel size of the image, the scale of the target object inside the image, or both?

Thank you.

Yes and no. Because convolutional weights are shared across spatial positions, a fully convolutional model will accept any input size without raising an error. But the filters it learns are scale-sensitive: they respond to the target structures at roughly the scale they appeared at during training. Resizing a test image from 112x112 to 224x224 doubles the apparent size of every object, so the learned features no longer match and the segmentation fails silently, with no shape error. So it is mainly the scale of the target object inside the image that matters, not the pixel dimensions themselves. The usual fixes are to resize test images to the training resolution (and upsample the predicted mask back afterwards), or to train with scale augmentation so the model sees objects at multiple sizes.