I have a model that was written using models from torchvision, and I want to test its performance with Inception-v3. However, with the same model structure and input images (size 224 x 224), I got the following error:
RuntimeError: Calculated padded input size per channel: (3 x 3). Kernel size: (5 x 5). Kernel size can't be greater than actual input size at /pytorch/aten/src/THNN/generic/SpatialConvolutionMM.c:50
Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].
You could resize (upsample) your images to the required size and try again.
I assume Szegedy et al. got better results by increasing the resolution for later variants of the Inception model.
As far as I remember, they used 224 in the first version and switched to 299 in their "Rethinking the Inception Architecture" paper.
Hi,
I got another error: AttributeError: 'InceptionOutputs' object has no attribute 'log_softmax'
with the pretrained inception_v3 model. Help!
Hi,
I got another error: AttributeError: 'InceptionOutputs' object has no attribute 'size'
with the pretrained inception_v3 model. I am using focal loss, which works well on other models; the focal loss code is here.
InceptionOutputs is a namedtuple, which contains the attributes .logits and .aux_logits, so you would probably want to pass output.logits to your loss function.