Global sum pooling

How can I perform global sum pooling in PyTorch (with and without the view() function)?

Assuming the same operation is applied as in a “global avg pooling” layer w.r.t. the shape, you could just use x = x.sum(dim=1), where dim is the dimension you would like to reduce.
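A minimal sketch of both variants, assuming a 4D conv activation of shape (N, C, H, W) where the spatial dimensions should be reduced (the shapes here are made up for illustration):

```python
import torch

x = torch.randn(2, 16, 7, 7)  # hypothetical (N, C, H, W) activation

# Without view(): reduce the spatial dimensions directly.
out = x.sum(dim=(2, 3))                                   # shape: (2, 16)

# With view(): flatten the spatial dims first, then reduce.
out_view = x.view(x.size(0), x.size(1), -1).sum(dim=2)    # shape: (2, 16)

print(torch.allclose(out, out_view))  # True
```

Both return one summed value per channel; only the dim argument changes depending on how you reshaped the activation.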

Just this works for me: x = x.sum(0)
Is that correct?

If dim0 is the batch dimension, your code looks wrong, since you should not reduce the batch dimension but the others (i.e. channels, spatial dimensions, feature dimension, etc.).

x = x.sum(-2) also works with my code.

I get the error below if I choose a dim value outside [-2, 1]:

IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 3)
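For what it's worth, the [-2, 1] range in that error means the tensor is 2-dimensional, so only dims 0 and 1 (or -2 and -1) are valid. A small reproduction (the shape is a guess based on the error):

```python
import torch

x = torch.randn(1, 8)    # a 2D tensor: valid dims are 0 and 1 (or -2 and -1)

print(x.sum(1).shape)     # torch.Size([1])
print(x.sum(-2).shape)    # torch.Size([8]) -- same as dim=0

try:
    x.sum(3)              # out of range for a 2D tensor
except IndexError as err:
    msg = str(err)
print(msg)
```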

And the error below with dim = 1:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x8 and 16384x1)

How can I solve that?

Sure, you are more familiar with your use case so take my post with a grain of salt. :wink:
I don’t know which operation causes the shape mismatch and also don’t know if you are even dealing with a batch dimension. Just check the shape before and after the reduction and make sure the shapes are explainable and make sense. E.g. if you are dealing with a simple model using linear layers, using sum(0) on an activation would remove the batch dimension, and you would have to think about whether that’s the right approach, as your model would “map” N samples to a single output.
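A sketch of that shape check, using a hypothetical linear layer sized after the 16384x1 weight in your error message:

```python
import torch
import torch.nn as nn

fc = nn.Linear(16384, 1)       # hypothetical layer, sized from the error message
x = torch.randn(8, 16384)      # hypothetical batch of 8 samples

print(x.shape)                 # check the shape before the reduction
y = x.sum(0)                   # removes the batch dimension!
print(y.shape)                 # torch.Size([16384])

out = fc(y)                    # a single output "mapped" from all 8 samples
print(out.shape)               # torch.Size([1])
```

Printing the shapes around the reduction like this usually makes it obvious which dimension was accidentally collapsed.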

What is the importance of the global sum pooling layer?