Summation of output of previous layer


I want to sum the outputs of several convolution layers and send the result to the next layer. Is there a sum layer (something like a pooling layer, convolution layer, …)?
How could I do it?


Maybe a linear layer with bias = 0 would do the same, although the activation function would make the result non-linear. Maybe there isn't any built-in way, is there? Unless I write a custom layer…

How would you like to sum the conv layer outputs?
If you would like to sum over a specific dimension, you could simply use:

output = conv(input)        # conv is e.g. an nn.Conv2d module
output = output.sum(dim=1)  # change the dim to your use case
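For instance, a minimal runnable sketch of this approach (the layer shapes here are hypothetical, just to show how `sum(dim=1)` collapses the channel dimension):

```python
import torch
import torch.nn as nn

# Hypothetical conv layer with 2 output channels
conv = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=3, padding=1)
x = torch.randn(1, 3, 8, 8)   # batch of one 3-channel 8x8 input

out = conv(x)                 # shape: [1, 2, 8, 8]
summed = out.sum(dim=1)       # sum over the channel dim -> shape: [1, 8, 8]
print(summed.shape)           # torch.Size([1, 8, 8])
```

Note that `sum` removes the summed dimension; pass `keepdim=True` if the next layer still expects a channel dimension.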

On the other hand, if you would like to sum patches similar to a conv layer, you could define a convolution kernel with all ones, and apply it using the functional API:

import torch
import torch.nn.functional as F

output = torch.randn(1, 2, 8, 8)  # comes from a preceding conv layer

sum_kernel = torch.ones(1, 2, 3, 3)
output_sum = F.conv2d(output, sum_kernel, stride=1, padding=1)
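If the goal from the original question is simply to sum the outputs of several conv layers element-wise before feeding the next layer, plain tensor addition also works, as long as the branch outputs share the same shape. A sketch under that assumption (all layer names and sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical parallel conv branches producing same-shaped outputs
conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
conv2 = nn.Conv2d(3, 8, kernel_size=5, padding=2)
conv3 = nn.Conv2d(3, 8, kernel_size=1)

x = torch.randn(1, 3, 16, 16)

# Element-wise sum; every branch output has shape [1, 8, 16, 16]
combined = conv1(x) + conv2(x) + conv3(x)

# Feed the summed tensor to the next layer
next_layer = nn.Conv2d(8, 16, kernel_size=3, padding=1)
out = next_layer(combined)
print(out.shape)  # torch.Size([1, 16, 16, 16])
```

The padding values are chosen so all branches keep the 16x16 spatial size and the shapes line up for the addition.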

Dear Ptrblck,

Thanks a lot. As always, your answer gave me an idea to solve my problem :slightly_smiling_face:

Best Regards