Changing the backbone for FasterRCNN

I am new to pytorch and I wanted to try different backbones for FasterRCNN. I am following this tutorial:
This is my model

import torch.nn as nn
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

def my_model(num_classes):
    backboneV3 = torchvision.models.inception_v3(pretrained=False)
    backboneV3_nofc = nn.Sequential(*list(backboneV3.children())[:-1])
    backboneV3_nofc.out_channels = 192
    anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
                                                    output_size=7,
                                                    sampling_ratio=2)
    modelV3 = FasterRCNN(backboneV3_nofc,
                         num_classes=num_classes,
                         rpn_anchor_generator=anchor_generator,
                         box_roi_pool=roi_pooler)
    return modelV3

I give the data during training exactly as in the tutorial but I always get this error:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 192 768, but got 2-dimensional input of size [2, 1000] instead
What should I change in the model?

You should pass a tensor of shape B x C x H x W, where B = batch size, C = channels, H = height, and W = width.
Assuming the exception above is occurring at the input layer, it looks like you are passing a tensor of shape 2 x 1000. Can you verify the shape of your input tensor?
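To illustrate, assuming the failing layer is a Conv2d with 768 input and 192 output channels (matching the "weight 192 768" in the error message; the exact layer is an assumption), a minimal sketch of the shape mismatch might look like this:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the failing layer: a conv expecting 4-d input.
conv = nn.Conv2d(in_channels=768, out_channels=192, kernel_size=1)

x4 = torch.rand(2, 768, 17, 17)  # 4-d input (B x C x H x W): works
out = conv(x4)
print(out.shape)  # torch.Size([2, 192, 17, 17])

x2 = torch.rand(2, 1000)  # 2-d input: reproduces a similar RuntimeError
try:
    conv(x2)
except RuntimeError as e:
    print("RuntimeError:", e)
```

The same kind of message appears whenever a layer with 4-dimensional weights receives a flattened (2-d) tensor instead of a feature map.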

So if I do:

backboneV3 = torchvision.models.inception_v3(pretrained=False)
x = torch.rand(1, 3, 299, 299)
out = backboneV3(x)

this works fine. But if I try to slice the model in order to add the FasterRCNN, it does not work anymore:

backboneV3_nofc  = nn.Sequential(*list(backboneV3.children())[:-1])

I feel a bit lost… how should I slice the model?

In the tutorial, the backbone is created using model.features.
This is unfortunately not possible with the current Inception implementation, since some functional calls are performed directly in the forward method, as seen here.
These functional calls also make it impossible to slice the model and wrap it in nn.Sequential: children() only returns the registered submodules, so any functional operations in forward (pooling, reshaping, etc.) are silently dropped.
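A toy module (hypothetical, not the actual Inception code) shows why wrapping children() in nn.Sequential breaks when forward contains functional calls:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        x = self.conv(x)
        # Functional pooling/flatten: NOT a child module, so it is lost
        # when the model is rebuilt from children().
        x = torch.flatten(torch.mean(x, dim=(2, 3)), 1)
        return self.fc(x)

net = Net()
seq = nn.Sequential(*list(net.children()))  # only conv and fc survive

x = torch.rand(1, 3, 32, 32)
print(net(x).shape)  # works: the functional step reduces to [1, 8] before fc

try:
    seq(x)  # fails: fc now receives the raw 4-d conv output
except RuntimeError as e:
    print("Sequential fails:", e)
```

The sliced Inception fails for the same reason: the pooling and flattening done functionally in its forward never make it into the nn.Sequential.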

A possible workaround would be to return the features, by replacing the last linear layer with nn.Identity:

model = models.inception_v3(aux_logits=False)
model.fc = nn.Identity()

This would not slice the model in any way; it keeps the forward definition and just returns the penultimate activations.

Let me know if this works.