I want to use the “DeepLabV3 ResNet50” model for a medical image segmentation task. I trained the model on images of size 112x112. At test time, when I use images of the same size, the model works fine, but when I resize the images to, say, 224x224, the output shows that no part of the input image belongs to the desired segment. I don’t get any errors about the input size, though. I have read that fully convolutional networks (FCNs) can accept inputs of different sizes because they have no fully connected layers, and I have checked that my model has no fully connected layers.
Do CNNs in general “learn” to identify target objects only at the size they have been trained on?
Does it have to do with the size of the image, the scale of the target object inside the image, or both?