AttributeError: 'list' object has no attribute 'size' when using pretrained densenet model (pytorch densenet161)

I have been trying to build a model using models.densenet161 but I’m having trouble training it.

# %%capture
if not debug:
    model = models.densenet161(pretrained=True)
    # Freeze all layers
    for param in model.parameters():
        param.requires_grad = False

    model.classifier = nn.Sequential(nn.Linear(2208, 256),
                                     nn.ReLU(),
                                     nn.Dropout(0.2),
                                     nn.Linear(256, len(trainloader.dataset.classes)),
                                     nn.LogSoftmax(dim=1))

    criterion = nn.NLLLoss()
    optimizer = optim.Adam(model.classifier.parameters(), lr=learning_rate)
    model.to(device);

When I try to print the summary for this model using
torchsummary.summary(model, (3, 224, 224))
I get the error
AttributeError: 'list' object has no attribute 'size'

Similarly, the same error is thrown during training at this line
logps = model.forward(inputs)

This was working completely fine with a pretrained resnet50, and I saw in other topics that to modify the network it is necessary to use classifier instead of fc. I would appreciate it if someone could tell me why this error is happening.

FYI this is the internal code that breaks:

site-packages/torchsummary/torchsummary.py in hook(module, input, output)
     17             m_key = "%s-%i" % (class_name, module_idx + 1)
     18             summary[m_key] = OrderedDict()
---> 19             summary[m_key]["input_shape"] = list(input[0].size())
     20             summary[m_key]["input_shape"][0] = batch_size
     21             if isinstance(output, (list, tuple)):

Edit 1: changed 1024 to 2208, in the first linear layer.

When you substitute the classifier with your own, you need to check the original model's classifier input dimension. Apparently, the original model's classifier linear layer takes 2208 input features. So I can run forward without error with this code:

model.classifier = nn.Sequential(nn.Linear(2208, 256),
                                 nn.ReLU(),
                                 nn.Dropout(0.2),
                                 nn.Linear(256, 10),
                                 nn.LogSoftmax(dim=1))

You can check the model by simply printing it: print(model)

Ah, of course. I had tried this as well, but replaced it again while trying other DenseNet variants and forgot to change it back for the example. I just double-checked, and it does not fix the original issue.

OK, then you have to check your input and make sure it is a torch.Tensor, not a list or something else, because this code works without errors for me:

model = models.densenet161(pretrained=False)
for param in model.parameters():
    param.requires_grad = False

model.classifier = nn.Sequential(nn.Linear(2208, 256),
                                 nn.ReLU(),
                                 nn.Dropout(0.2),
                                 nn.Linear(256, 10),
                                 nn.LogSoftmax(dim=1))
x = torch.rand(4, 3, 224, 224)
out = model.forward(x)
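One way to make that input check explicit is a small guard before the forward call. A sketch with a hypothetical check_batch helper (not part of the original code):

```python
import torch

def check_batch(inputs):
    # Hypothetical helper: validate that a batch is a 4-D NCHW tensor
    # before it reaches model.forward().
    if not torch.is_tensor(inputs):
        raise TypeError(f"expected torch.Tensor, got {type(inputs).__name__}")
    if inputs.dim() != 4:
        raise ValueError(f"expected a 4-D NCHW batch, got shape {tuple(inputs.shape)}")
    return inputs

check_batch(torch.rand(4, 3, 224, 224))  # a valid batch passes through
```

If your DataLoader yields `(inputs, labels)` tuples and you accidentally pass the whole tuple (or a list) to the model, a guard like this fails loudly at the right place instead of deep inside a hook.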

To see that it works, you should reinitialize your model to remove the hooks applied by the torchsummary function.

Also, I would like to mention that from a quick check it seems torchsummary doesn’t work with DenseNet, but it works well with ResNet, for instance.