Removing 2 layers from a pretrained Inception model gives errors

On running the code snippet below I get the following error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [2, 1000] instead

import torch
import torchvision.models as models
from torchsummary import summary
inception = models.inception_v3(pretrained=True)
remove = list(inception.children())[:-2]
model = torch.nn.Sequential(*remove)
summary(model, (3, 299, 299))

I then tried to check if inception’s summary was working, and it was.

import torch
import torchvision.models as models
from torchsummary import summary
inception = models.inception_v3(pretrained=True)
summary(inception, (3, 299, 299))

Finally, instead of [:-2], I kept all layers in the variable “remove”. This gave me the same error again: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [2, 1000] instead

import torch
import torchvision.models as models
from torchsummary import summary
inception = models.inception_v3(pretrained=True)
remove = list(inception.children())[:]
model = torch.nn.Sequential(*remove)
summary(model, (3, 299, 299))

Is this a bug, or am I missing something here? I tried the same thing with ResNet and it worked perfectly.

Wrapping all modules in an nn.Sequential container might work for a simple model definition, but it discards any logic in the original forward method and simply calls the children one after another.
In your case the Inception model fails, since inception.children() returns the child modules in the order they were initialized. model[15] therefore contains the InceptionAux module (which is normally used in a side branch of the model) and applies a linear layer to your activations, producing the 2-dimensional [2, 1000] output from the error message.
The next (convolutional) layer then fails because of the shape mismatch.
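
You can confirm this by listing the child modules and their indices (a quick check; the exact index of InceptionAux can differ between torchvision versions):

import torchvision.models as models

inception = models.inception_v3(pretrained=True)
# Print each child module with its position to see where InceptionAux ends up
for idx, module in enumerate(inception.children()):
    print(idx, module.__class__.__name__)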

If you want to change the model execution, I would recommend writing a custom model that derives from the Inception model and changes the forward definition.
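
As a minimal sketch of that idea, assuming you only want to drop the final fc layer and the auxiliary branch: the class name InceptionFeatures and the nn.Identity trick below are my own additions, not part of torchvision, so adapt them as needed. If you have to cut deeper into the network, subclass the model instead and rewrite forward to stop at the layer you need.

import torch
import torch.nn as nn
import torchvision.models as models

class InceptionFeatures(nn.Module):
    # Hypothetical wrapper: keeps the pretrained forward logic but returns
    # the pooled 2048-dim features instead of the 1000-class logits.
    def __init__(self):
        super().__init__()
        inception = models.inception_v3(pretrained=True)
        inception.aux_logits = False   # skip the auxiliary classifier in forward
        inception.AuxLogits = None
        inception.fc = nn.Identity()   # replace the final classifier with a no-op
        self.inception = inception

    def forward(self, x):
        return self.inception(x)

model = InceptionFeatures()
model.eval()
with torch.no_grad():
    out = model(torch.randn(2, 3, 299, 299))
print(out.shape)  # expected: torch.Size([2, 2048])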