# How to build my own convolution method in PyTorch?

Hello, I'm new to deep learning.
I want to test something, so I want to build my own 2D convolution method.
I want to design it as follows, with kernel size = T:

1. The input is Channel x H x W and the kernel is also Channel x T x T, same as before.
2. But each kernel application produces a T x T output from one element-wise multiplication, not a single number.
3. So it is a kind of projection onto a T x T surface,
collapsing the channel dimension.

How can I implement this in PyTorch while still keeping properties like autograd and everything else working?

You could use `unfold` to create the patches, apply your custom operation using your filters, and use `fold` or a reshape to get the output back.
This post gives you an example of unfolding the input.
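For reference, a minimal sketch of what `unfold` produces (the input size here is just an example):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 224, 224)       # [B, C, H, W]
patches = F.unfold(x, kernel_size=3)  # flattens each 3x3 window per channel
print(patches.shape)                  # [1, 3*3*3, 222*222] = [1, 27, 49284]
```

Each column of `patches` holds one 3x3 window across all channels, so a custom per-patch operation can be applied before folding or reshaping back.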

Hello, I'm a beginner in DL & PyTorch and have been helped many times by your other answers on this forum.
So I'm very glad to see you here on my first question.

However, in this case `unfold` might not work.
I am stuck because the custom operation gives one value per element-wise multiplication, but what I want is a (kernel_size x kernel_size) output from each element-wise multiplication. After multiplying, I just sum down along the channel dimension.

I've considered `view` and `unfold`, but I don't think they help.

This way the output gets bigger than the input: for example, if the kernel_size is 3x3,
a 224x224 input grows to 3x224 by 3x224.

Then applying a new 3x3 convolution with stride=3 on top of that gives the same result as the normal operation.

The reason I want this process is that I want to quantize only part of it, not the whole thing.
Is there a way to do this?

Thank you.

How would you like to reshape the output?
`unfold` will create the patches and you could use it as an `im2col` method.
Once you get all patches you could apply the desired operation. Would you like to apply another convolution with `stride=3` afterwards?

I got the point of `unfold` and the desired operation through it.
It can produce the output I want.

```
[ B, C, H, W ]  --desired operation-->  [ B, C*, 3xH, 3xW ]  --stride=3 conv-->  [ B, C*, H, W ]
```

This is an example of what I want in the case of kernel=3, so I can quantize the weights in the desired operation. After the stride=3 convolution, the result is the same as a normal convolution.
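If it helps, this equivalence can be checked numerically with `unfold`. A minimal sketch (the sizes and variable names are mine, and the stride=3 step is reduced to a plain per-tile sum, which is all the equivalence needs):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, C, H, T, C_out = 1, 3, 8, 3, 2            # toy sizes, kernel T = 3
x = torch.randn(B, C, H, H)
w = torch.randn(C_out, C, T, T)
m = H - T + 1                                # number of stride-1 window positions

# "Desired operation": every window gives a T x T tile of channel-summed products
p = F.unfold(x, T).view(B, C, T * T, m * m)            # [B, C, T*T, L]
big = torch.einsum('bckl,ock->bokl', p, w.view(C_out, C, T * T))
big = big.view(B, C_out, T, T, m, m).permute(0, 1, 4, 2, 5, 3)
big = big.reshape(B, C_out, m * T, m * T)              # [B, C*, T*m, T*m]

# Summing each T x T tile (the stride=T step) recovers the ordinary convolution
back = F.avg_pool2d(big, T, stride=T) * (T * T)
print(torch.allclose(back, F.conv2d(x, w), atol=1e-5))  # True
```

The weights used inside the expanded map (`big`) are exposed before the spatial sum, which is the point where the partial quantization could be applied.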

However, I want my function to take weights and internally do all the work such as padding, just like `nn.Conv2d` does. It is hard for me to rebuild from scratch exactly what `nn.Conv2d` does, so what I tried was inserting my function into `nn/modules/conv.py`, but that did not work either.

So I'm still stuck.
Is there no way to just modify `nn.Conv2d` to serve as the desired function?

Here is the flow for the simple case of stride=1; I wrote it out in case the explanation above was lacking.

```python
for batchs in range(batch):
    for channels in range(channel):
        for i in range(move):
            for j in range(move):
                # sum over the input-channel dimension (dim=0 after indexing the batch)
                out[batchs, channels, kernel * i:kernel * i + kernel, kernel * j:kernel * j + kernel] = \
                    torch.sum(input[batchs, :, i:i + kernel, j:j + kernel] * weight[channels, :, :, :], dim=0)
```
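For reference, the same loop can be vectorized with `unfold` plus an `einsum` that sums only over the channel dimension. A sketch with a helper name of my own, assuming stride=1 and no padding:

```python
import torch
import torch.nn.functional as F

def patchwise_conv(inp, weight):
    """Each sliding window yields a T x T tile (products summed over
    channels only) instead of a single number. Hypothetical helper,
    stride=1, no padding."""
    B, C, H, W = inp.shape
    C_out, _, T, _ = weight.shape
    mh, mw = H - T + 1, W - T + 1                      # window positions
    p = F.unfold(inp, T).view(B, C, T * T, mh * mw)    # [B, C, T*T, L]
    out = torch.einsum('bckl,ock->bokl', p, weight.view(C_out, C, T * T))
    out = out.view(B, C_out, T, T, mh, mw).permute(0, 1, 4, 2, 5, 3)
    return out.reshape(B, C_out, mh * T, mw * T)
```

Since this is built entirely from differentiable tensor ops, autograd works through it automatically.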

You could try to manipulate the native conv2d implementation, but note that you won’t be able to manipulate the backend implementations from e.g. cudnn.

I'm also not sure if that would be easier than writing out your operations manually.
The `padding` argument in `nn.Conv2d` can be simply added to the unfolding approach using `F.pad`.
`stride`, `kernel_size` and `dilation` are all set in the `unfold` call.
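As a concrete sketch of that mapping (the parameter values here are just an example), the usual im2col matmul reproduces `nn.Conv2d` when `unfold` gets the same `kernel_size`, `stride`, `padding`, and `dilation`:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 224, 224)

# The Conv2d hyper-parameters map directly onto unfold's arguments
conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1, dilation=1, bias=False)
patches = F.unfold(x, kernel_size=3, stride=2, padding=1, dilation=1)  # [1, 27, 112*112]

# Sanity check: a matmul over the patches reproduces the convolution
out = conv.weight.view(8, -1) @ patches     # [1, 8, 112*112]
out = out.view(1, 8, 112, 112)
print(torch.allclose(out, conv(x), atol=1e-4))  # True
```

For the custom operation here, the matmul would be replaced by the channel-only sum, but the padding/stride/dilation handling stays identical.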

You could of course write the nested loop as a CPU/CUDA extension and use it in your Python script as explained here.


Oh, I've got it now.
Thank you very much.
You've made my day!