Error in training inception-v3

I have a model that was written using models from torchvision and I want to test its performance with Inception-v3. However, with the same model structure and input images (size 224 x 224), I got the following error.

RuntimeError: Calculated padded input size per channel: (3 x 3). Kernel size: (5 x 5). Kernel size can't be greater than actual input size at /pytorch/aten/src/THNN/generic/SpatialConvolutionMM.c:50

Any thoughts on how to fix this? Thanks!

1 Like

Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].
You could up-/resample your images to the needed size and try it again.
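
For example, the resizing can be done in the transform pipeline or directly on a batch (a minimal sketch, assuming torchvision transforms; the normalization values are the usual ImageNet stats and only an assumption here):

import torch
import torch.nn.functional as F
from torchvision import transforms

# resize every image to the 299 x 299 input Inception-v3 expects
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# or resample an already-loaded 224 x 224 batch on the fly
batch = torch.randn(8, 3, 224, 224)
batch = F.interpolate(batch, size=(299, 299), mode='bilinear', align_corners=False)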

7 Likes

Thanks! Any idea why Inception-v3 was designed for 299 x 299 images while other models normally use 224 x 224?

I assume Szegedy et al. got better results increasing the resolution for the variants of the Inception model.
As far as I remember they used 224 in their first version and switched to 299 in their Rethinking the Inception Architecture paper.

2 Likes

Hi,
I have got another error saying
AttributeError: 'InceptionOutputs' object has no attribute 'log_softmax'
in the inception_v3 pretrained model. Help!!

InceptionOutputs contains the .logits and .aux_logits attributes, so you would need to index one of these tensors to call .log_softmax on it.

1 Like

How? I have tried to set it to False:

for name, param in model_transfer.named_parameters():
    if name != 'fc.weight':
        param.requires_grad = False
    param.aux_logits = False
    param.logits = False

It returns the same error, and even with NLL loss I got the error
AttributeError: 'InceptionOutputs' object has no attribute 'dim'

How can I index one of these tensors to call it?

.logits is an attribute of the output:

out = model(torch.randn(2, 3, 299, 299))
out.logits.log_softmax(1)

If you don’t want to use the aux_logits, then set aux_logits=False in the instantiation of the model.
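
A minimal sketch, assuming torchvision's inception_v3 with the older pretrained= argument; with the auxiliary classifier disabled the model returns a plain tensor even in training mode, so log_softmax can be called on it directly:

import torch
from torchvision import models

# build the pretrained model without the auxiliary classifier
model = models.inception_v3(pretrained=True, aux_logits=False)
model.train()

out = model(torch.randn(2, 3, 299, 299))  # plain tensor, not an InceptionOutputs tuple
log_probs = out.log_softmax(dim=1)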

3 Likes

Hi,
I have got another error saying
AttributeError: 'InceptionOutputs' object has no attribute 'size'
in the inception_v3 pretrained model. I am using focal loss, which works well on other models. The focal loss code is here:

import torch

def focal_loss(targets, logits, eps, l):
    # element-wise BCE with logits, then down-weight easy examples
    ce_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    pt = torch.exp(-ce_loss)
    loss = (eps * (1 - pt) ** l * ce_loss).mean()
    return loss

Please help. The error is thrown on the ce_loss statement.

InceptionOutputs is a namedtuple, which contains the attributes .logits and .aux_logits, so you would probably want to pass output.logits to your loss function.
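
Something like this, as a sketch (focal_loss is the function posted above; targets is assumed to be a float tensor with the same shape as the logits, and eps/l are your hyperparameters):

out = model(images)  # InceptionOutputs namedtuple in training mode
loss = focal_loss(targets, out.logits, eps, l)

# optionally add the auxiliary head with a small weight
# (0.4 is a common choice for Inception aux losses, not a requirement)
if out.aux_logits is not None:
    loss = loss + 0.4 * focal_loss(targets, out.aux_logits, eps, l)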

2 Likes

x = x.view(batch_size, timesteps, -1)

AttributeError: 'InceptionOutputs' object has no attribute 'view'
I need help.

Does my previous message address this issue?
As you can see, InceptionOutputs contains attributes, which you would need to access first.

In the train and eval loop you could use something like this:

output = model(input)
if isinstance(output, tuple):  # <-- inception output is a tuple (x, aux)
    output = output[0]  # <-- use just x
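
Applied to the x.view snippet above (a sketch; batch_size, timesteps, and images are the placeholders from that post):

feats = model(images)
if isinstance(feats, tuple):  # InceptionOutputs is a namedtuple, so this check matches it
    feats = feats[0]          # keep only the main output
x = feats.view(batch_size, timesteps, -1)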