Forward and backward implementation of max pool 2d


I’d like to extend max pooling 2d with a new idea. However, for this I need to extend the forward and backward pass of max pooling. Ideally, I would use the cuDNN implementation to compute the forward and backward pass of max pooling, but as far as I can see, these are not exposed in the Python API. I’ve been looking through the source code, but I’m unable to find the implementation of max_pool2d. In short: is there a way to expose the forward and backward pass implementations to the Python API? And related: how is max_pool2d implemented? As far as I understand, the functions are automatically generated, but beyond that I’m very confused.



This says it is implemented as SpatialDilatedMaxPooling. You can find the code for that here and here.

I’m not sure what your use case is, but if it’s simple, you can just perform operations on the output of max_pool2d, not worry about implementing a backward, and autograd will take care of it.
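A minimal sketch of what this looks like in practice (the kernel size and the multiply-by-two op are arbitrary choices for illustration): any differentiable operation applied to the pooled output is tracked by autograd, so no custom backward is needed.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4, requires_grad=True)

# Apply max pooling, then an arbitrary differentiable op on its output.
# Autograd records both and composes their backward passes automatically.
out = F.max_pool2d(x, kernel_size=2) * 2.0
out.sum().backward()

# Gradient flows only to the max location of each pooling window,
# scaled by the factor of 2 from the elementwise multiply.
print(x.grad)
```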


Thanks, that’s very helpful.

My case is basically the following: I want to obtain max pooling indices from a tensor A, and then get the values at the locations of those indices in a tensor B of the same dimensionality. So far, I haven’t figured out how to do this and was thinking I could do it by using the fwd/bwd implementations of max_pool2d, but maybe I’m making things too complicated?

I was able to do this using torch.gather. No need to dive into PyTorch internals.
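For anyone finding this later, here is one way this can be done (a sketch; the tensor names A and B come from the question above, the shapes and kernel size are arbitrary): max_pool2d with return_indices=True gives flat indices over the spatial dimensions, which torch.gather can then apply to B.

```python
import torch
import torch.nn.functional as F

# Toy tensors of the same shape, standing in for A and B from the question.
A = torch.randn(1, 3, 4, 4)
B = torch.randn(1, 3, 4, 4)

# Pooled values of A and the indices of the maxima.
pooled, idx = F.max_pool2d(A, kernel_size=2, return_indices=True)

# The returned indices are flattened over the spatial (H*W) dimensions,
# so flatten B the same way before gathering, then restore the shape.
b_at_max = B.flatten(2).gather(2, idx.flatten(2)).view_as(pooled)
```

Sanity check for the indexing: gathering from A itself with the same indices reproduces `pooled` exactly.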

Hey, I also want to extend PyTorch’s max pool implementation. The links mentioned here seem to be obsolete now. Can you point me to the current location of the implementation in the library, and also explain the process of compiling and running the updated code?
Thanks in advance 🙂

The CPU implementation should be located here and the CUDA one here.

Check the Contributing docs to see how to incrementally rebuild (and test) your changes. 🙂

Thanks, that was helpful 🙂