TorchSummary Summary not working on mixed input architectures

import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3),
            nn.Conv2d(16, 32, kernel_size=3),
            nn.Conv2d(32, 2, kernel_size=3),
            nn.MaxPool2d(2, stride=2),
            nn.Linear(26, 10),
        )

        self.fcn = nn.Sequential(
            nn.Linear(256, 96),
            nn.Linear(96, 48),
            nn.Linear(48, 24),
        )

        self.combined = nn.Sequential(
            nn.Linear(34, 20),
            nn.Linear(20, 15),
        )

    def forward(self, image1, data):
        x_image = self.encoder(image1)
        x_fcn = self.fcn(data)
        x_multi = torch.cat((x_image, x_fcn), dim=1)
        return self.combined(x_multi)

I have an architecture that looks like this, and I am trying to view the summary (and get the number of parameters) for this model. However, when I run summary(model, [(1, 32, 8), (256,)], device='cpu'), it prints the Layer (type) / Output Shape table, but just after that, where the total number of parameters should appear, it fails with can't multiply sequence by non-int of type 'tuple'. I can include the entire stack trace if that helps. I'm not sure how to fix this problem.
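If the goal is just the parameter count, it can be read off the model directly, with no summary library involved. A minimal sketch, using a small hypothetical stand-in model rather than the one above:

```python
import torch.nn as nn

# Hypothetical stand-in model, just to illustrate direct parameter counting.
toy = nn.Sequential(
    nn.Linear(4, 3),  # 4*3 weights + 3 biases = 15 params
    nn.Linear(3, 2),  # 3*2 weights + 2 biases = 8 params
)

# Sum the element counts of all parameter tensors.
n_params = sum(p.numel() for p in toy.parameters())
print(n_params)  # 23
```

The same one-liner works on any nn.Module, including multi-input models, because it never runs a forward pass.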

Use torchinfo instead: torchsummary is deprecated and hasn't received updates in a few years.

Hi, thanks for your response. I installed torchinfo 1.5.4 (I'm using Python 3.6) and ran summary(model, [(1, 32, 8), (1, 256)], device='cpu') with torchinfo's summary, but I get a RuntimeError saying Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []. I'm not sure what to do.
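Unlike torchsummary, torchinfo actually executes a forward pass (and its input_size includes the batch dimension), so "Executed layers up to: []" usually means the model's own forward fails on those input shapes before any layer completes. That seems to be the case here: with a (1, 1, 32, 8) image, the encoder's MaxPool2d output has a trailing dimension of 1, which nn.Linear(26, 10) cannot consume. A sketch reproducing the failure with plain torch, assuming the model definition from the question:

```python
import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3),   # (1, 32, 8) -> (16, 30, 6)
            nn.Conv2d(16, 32, kernel_size=3),  # -> (32, 28, 4)
            nn.Conv2d(32, 2, kernel_size=3),   # -> (2, 26, 2)
            nn.MaxPool2d(2, stride=2),         # -> (2, 13, 1)
            nn.Linear(26, 10),                 # expects last dim 26, receives 1
        )
        self.fcn = nn.Sequential(
            nn.Linear(256, 96), nn.Linear(96, 48), nn.Linear(48, 24),
        )
        self.combined = nn.Sequential(nn.Linear(34, 20), nn.Linear(20, 15))

    def forward(self, image1, data):
        x_image = self.encoder(image1)
        x_fcn = self.fcn(data)
        x_multi = torch.cat((x_image, x_fcn), dim=1)
        return self.combined(x_multi)

model = NeuralNetwork()
try:
    model(torch.randn(1, 1, 32, 8), torch.randn(1, 256))
except RuntimeError as e:
    print("forward failed:", e)  # shape mismatch at the encoder's Linear
```

So the summary tool is just the messenger; once model(image, data) runs cleanly on real tensors of the intended shapes, torchinfo's summary should run as well.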