Custom pooling/conv layer

If I want to implement a custom pooling or convolution layer, like the gated or tree pooling functions shown in this article, do I have to modify the C++ code, or is there a simpler way to extend it in Python? I found the official tutorial on ‘Custom C++ and CUDA Extensions’, but it does not seem sufficient for my goal.


I am also interested in this. Have you figured it out?

A custom convolution layer can be implemented either by inheriting from the class nn.Conv2d, or through the function unfold. Unfortunately, unfold does not support dilation.
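As a rough illustration of the first route, a layer can subclass nn.Conv2d to reuse its weight and bias parameters and replace the forward pass with its own computation. This is only a sketch assuming stride 1, no padding, and no groups; the class name MyConv2d is made up, and the forward pass here just reproduces the standard convolution via unfold as a placeholder for whatever custom operation you need:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConv2d(nn.Conv2d):
    # Sketch: reuse nn.Conv2d's parameters but compute the forward pass by hand.
    def forward(self, x):
        kh, kw = self.kernel_size
        # output size for stride 1, no padding
        h_out = x.size(2) - self.dilation[0] * (kh - 1)
        w_out = x.size(3) - self.dilation[1] * (kw - 1)
        cols = F.unfold(x, self.kernel_size, dilation=self.dilation)  # (N, C*kh*kw, L)
        weight = self.weight.view(self.out_channels, -1)              # (O, C*kh*kw)
        out = weight.matmul(cols)                                     # (N, O, L)
        if self.bias is not None:
            out = out + self.bias.view(1, -1, 1)
        return out.view(x.size(0), self.out_channels, h_out, w_out)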

You may want to check these two topics:

Custom pooling can be implemented like this:
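For example, here is a minimal sketch of gated pooling as a plain nn.Module, mixing max and average pooling with a learned gate. This simplifies the per-region gating from the article to a single learnable gate per channel, and the class and parameter names are my own:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedPool2d(nn.Module):
    # Sketch: learnable mix of max and average pooling, one gate per channel.
    def __init__(self, channels, kernel_size, stride=None):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride if stride is not None else kernel_size
        # one learnable gate logit per channel, initialised to an equal mix
        self.gate = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        g = torch.sigmoid(self.gate)  # gate in (0, 1)
        max_pooled = F.max_pool2d(x, self.kernel_size, self.stride)
        avg_pooled = F.avg_pool2d(x, self.kernel_size, self.stride)
        return g * max_pooled + (1 - g) * avg_pooled

Usage would look like pool = GatedPool2d(16, 2) applied to an input of shape (N, 16, H, W); since everything is built from differentiable PyTorch ops, autograd handles the backward pass and no C++ extension is needed.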


What is the issue you are seeing?

import torch

inp = torch.randn(1, 3, 10, 12)
w = torch.randn(2, 3, 4, 5)
# im2col: (1, 3*4*5, L), one column per sliding-window location
inp_unf = torch.nn.functional.unfold(inp, (4, 5), dilation=(2, 2))
# matrix-multiply each column with the flattened filters
out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
# fold the columns back into the (4, 4) output feature map
out = torch.nn.functional.fold(out_unf, (4, 4), (1, 1))
(torch.nn.functional.conv2d(inp, w, dilation=(2, 2)) - out).abs().max().item()

seems to work for me.

Best regards

Thomas

nn.functional.unfold

only supports the four-dimensional case.

Indeed, from your message I took the Conv2d class as an indication that the 4d case is what you want, but in the other thread you specifically say non-4d.

Best regards

Thomas

Sorry for the ambiguity, and thanks for the clarification.

I am also trying to implement gated pooling and I am quite badly stuck. Have you implemented this?
Any help would be greatly appreciated.