Torchsummary returns TypeError

Hello, I am currently learning to modify the architecture of a model in PyTorch and ran into a problem when using torchsummary. I always get the output of my model's architecture followed by a TypeError:


----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 64, 64, 64]           3,136
            Conv2d-2          [-1, 128, 32, 32]         131,200
       BatchNorm2d-3          [-1, 128, 32, 32]             256
         LeakyReLU-4          [-1, 128, 32, 32]               0
            Conv2d-5          [-1, 256, 16, 16]         524,544
       BatchNorm2d-6          [-1, 256, 16, 16]             512
         LeakyReLU-7          [-1, 256, 16, 16]               0
            Conv2d-8            [-1, 256, 8, 8]       1,048,832
       BatchNorm2d-9            [-1, 256, 8, 8]             512
        LeakyReLU-10            [-1, 256, 8, 8]               0
           Conv2d-11            [-1, 256, 4, 4]       1,048,832
      BatchNorm2d-12            [-1, 256, 4, 4]             512
        LeakyReLU-13            [-1, 256, 4, 4]               0
           Linear-14                  [-1, 256]       1,048,832
Traceback (most recent call last):
  File "main_gen_pseudo-data.py", line 149, in <module>
    main()
  File "main_gen_pseudo-data.py", line 110, in main
    summary(skipnet_model, (3, 128, 128))
  File "/home/cgal/anaconda3/envs/pytorch/lib/python3.7/site-packages/torchsummary/torchsummary.py", line 93, in summary
    total_output += np.prod(summary[layer]["output_shape"])
  File "/home/cgal/anaconda3/envs/pytorch/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 2772, in prod
    initial=initial)
  File "/home/cgal/anaconda3/envs/pytorch/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
TypeError: can't multiply sequence by non-int of type 'list'

I have already checked that the input during training is (16, 3, 128, 128), and the command I use for summary is summary(skipnet_model, (3, 128, 128)). Can someone help me figure out this problem?

Here is the full architecture of my SkipNet: link


Hi there, again. I am trying to reproduce the error; however, the get_shading function seems to be missing from the code, so it would be great if you could provide the missing code.

I have reproduced the error without get_shading, and will let you know as soon as I solve it ~

I have edited the link to include get_shading.

I dove into the code and found that the error occurs in SkipNet_Encoder. Because it returns 5 outputs instead of 1, the output_shape recorded for SkipNet_Encoder contains 5 shape lists, which np.prod cannot handle. This is exactly why summary throws the error.

My debug information can be seen below.
P.S.: the breakpoint is set at line 93 of torchsummary.py.

'SkipNet_Encoder-15',
              OrderedDict([('input_shape', [-1, 3, 128, 128]),
                           ('output_shape',
                            [[-1, 256],
                             [-1, 64, 64, 64],
                             [-1, 128, 32, 32],
                             [-1, 256, 16, 16],
                             [-1, 256, 8, 8]]),
                           ('nb_params', 0)])),
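
The failure is easy to reproduce in isolation: np.prod cannot reduce this ragged list of shape lists. A minimal sketch, assuming the NumPy 1.x behavior from the traceback above (newer NumPy versions refuse to build the ragged array in the first place):

    import numpy as np

    # the encoder's recorded output_shape: five shape lists of different lengths
    output_shape = [[-1, 256],
                    [-1, 64, 64, 64],
                    [-1, 128, 32, 32],
                    [-1, 256, 16, 16],
                    [-1, 256, 8, 8]]

    # NumPy falls back to an object array of lists and then tries to multiply
    # the inner lists together during the reduction
    np.prod(output_shape)
    # TypeError: can't multiply sequence by non-int of type 'list'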

Solution:
You can try to build the encoder with only one output instead of multiple outputs, or you can call summary separately on each sub-module of the model.
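
For the second option, a minimal sketch (the encoder attribute name is an assumption based on the linked code). It works because torchsummary does not register a forward hook on the module you pass in as the top-level model, so the encoder's 5-tuple output is never recorded when it is summarized on its own:

    from torchsummary import summary

    # summarize the encoder by itself; torchsummary skips the hook for the
    # top-level module, so its five outputs never reach np.prod
    summary(skipnet_model.encoder, (3, 128, 128))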

Hope my analysis makes sense to you ~


So what I understand is that summary only works if the model has a single output, and since mine has 5 outputs (it follows the structure of U-Net), it can't use summary, is that right? Can you show how to use summary separately? Thank you for your help so far.

Firstly, I need to revise my answer: as far as I can tell, summary cannot directly summarize a model whose forward function returns 5 outputs. But you can simply avoid the default forward for the encoder: rename it to forward_any_name_you_want to compute the results, then call that method explicitly in your SkipNet model. As far as I have tested, this works. My change is just below:

out, skip_1, skip_2, skip_3, skip_4 = self.encoder.forward_1(x)
# change from self.encoder(x)

The output is the following:

You can see that SkipNet_Encoder has disappeared, while the layers inside the encoder are still there. There is also a layer named SkipNet_Decoder which has no params; if you don't want it listed either, apply the same trick and call a custom method instead of the forward method.

torch.Size([2, 3, 128, 128])
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 64, 64, 64]           3,136
            Conv2d-2          [-1, 128, 32, 32]         131,200
       BatchNorm2d-3          [-1, 128, 32, 32]             256
         LeakyReLU-4          [-1, 128, 32, 32]               0
            Conv2d-5          [-1, 256, 16, 16]         524,544
       BatchNorm2d-6          [-1, 256, 16, 16]             512
         LeakyReLU-7          [-1, 256, 16, 16]               0
            Conv2d-8            [-1, 256, 8, 8]       1,048,832
       BatchNorm2d-9            [-1, 256, 8, 8]             512
        LeakyReLU-10            [-1, 256, 8, 8]               0
           Conv2d-11            [-1, 256, 4, 4]       1,048,832
      BatchNorm2d-12            [-1, 256, 4, 4]             512
        LeakyReLU-13            [-1, 256, 4, 4]               0
           Linear-14                  [-1, 256]       1,048,832
         Upsample-15            [-1, 256, 4, 4]               0
         Upsample-16            [-1, 256, 4, 4]               0
           Linear-17                   [-1, 27]           6,939
  ConvTranspose2d-18            [-1, 256, 8, 8]       1,048,832
      BatchNorm2d-19            [-1, 256, 8, 8]             512
        LeakyReLU-20            [-1, 256, 8, 8]               0
  ConvTranspose2d-21          [-1, 256, 16, 16]       1,048,832
      BatchNorm2d-22          [-1, 256, 16, 16]             512
        LeakyReLU-23          [-1, 256, 16, 16]               0
  ConvTranspose2d-24          [-1, 128, 32, 32]         524,416
      BatchNorm2d-25          [-1, 128, 32, 32]             256
        LeakyReLU-26          [-1, 128, 32, 32]               0
  ConvTranspose2d-27           [-1, 64, 64, 64]         131,136
      BatchNorm2d-28           [-1, 64, 64, 64]             128
        LeakyReLU-29           [-1, 64, 64, 64]               0
  ConvTranspose2d-30         [-1, 64, 128, 128]          65,600
      BatchNorm2d-31         [-1, 64, 128, 128]             128
        LeakyReLU-32         [-1, 64, 128, 128]               0
           Conv2d-33          [-1, 3, 128, 128]             195
  SkipNet_Decoder-34          [-1, 3, 128, 128]               0
  ConvTranspose2d-35            [-1, 256, 8, 8]       1,048,832
      BatchNorm2d-36            [-1, 256, 8, 8]             512
        LeakyReLU-37            [-1, 256, 8, 8]               0
  ConvTranspose2d-38          [-1, 256, 16, 16]       1,048,832
      BatchNorm2d-39          [-1, 256, 16, 16]             512
        LeakyReLU-40          [-1, 256, 16, 16]               0
  ConvTranspose2d-41          [-1, 128, 32, 32]         524,416
      BatchNorm2d-42          [-1, 128, 32, 32]             256
        LeakyReLU-43          [-1, 128, 32, 32]               0
  ConvTranspose2d-44           [-1, 64, 64, 64]         131,136
      BatchNorm2d-45           [-1, 64, 64, 64]             128
        LeakyReLU-46           [-1, 64, 64, 64]               0
  ConvTranspose2d-47         [-1, 64, 128, 128]          65,600
      BatchNorm2d-48         [-1, 64, 128, 128]             128
        LeakyReLU-49         [-1, 64, 128, 128]               0
           Conv2d-50          [-1, 3, 128, 128]             195
  SkipNet_Decoder-51          [-1, 3, 128, 128]               0
================================================================
Total params: 9,455,201
Trainable params: 9,455,201
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.19
Forward/backward pass size (MB): 78.28
Params size (MB): 36.07
Estimated Total Size (MB): 114.54
----------------------------------------------------------------
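
For completeness, a self-contained toy sketch of the rename trick (toy layers, not the real SkipNet). It works because torchsummary's forward hooks only fire through nn.Module.__call__, i.e. the normal forward path, so calling the renamed method directly makes the encoder itself invisible to summary while its sub-layers are still recorded:

    import torch.nn as nn
    from torchsummary import summary

    class ToyEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 64, 4, stride=2, padding=1)
            self.conv2 = nn.Conv2d(64, 128, 4, stride=2, padding=1)

        # renamed from `forward`: the hook torchsummary registers on this
        # module never fires because __call__ is bypassed
        def forward_1(self, x):
            skip_1 = self.conv1(x)
            out = self.conv2(skip_1)
            return out, skip_1

    class ToyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = ToyEncoder()
            self.head = nn.Conv2d(128, 3, 1)

        def forward(self, x):
            # call the renamed method instead of self.encoder(x)
            out, skip_1 = self.encoder.forward_1(x)
            return self.head(out)

    summary(ToyModel(), (3, 128, 128), device="cpu")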

Thanks a lot! I have marked your answer as the solution.

You're welcome, happy coding ~

@V_Deamo how did you change the forward function?