How to Implement a Custom Layer and Update Weights in PyTorch

Hello,
I am trying to create some layers, let's say a convolutional layer that has some logic operations inside it. Basically, I can do that using tensor operations and write it as a function inside the class. However, I am confused about how to update the weights of my custom convolution filter.
As far as I know, most people implementing a new layer still use torch.nn.functional, so how can I do this?

I have read some threads, like here, here, and also here.

Or is just using nn.Parameter enough, like in here?

I am still confused about where to start; any suggestions on where to begin are welcome.


Implementing a custom layer: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules
This is a very good reference, and in fact it was present in one of the links you mentioned.
You basically need to write the implementation of your custom layer in the forward() function.
The weight update is then taken care of by autograd.
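To make that concrete, here is a minimal sketch (the class name MyConv2d, the initialization scale, and the shapes are just illustrative, not from the tutorial): register the filter as an nn.Parameter, use differentiable ops in forward(), and any optimizer will update it without a manual backward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # Registering the tensor as an nn.Parameter is what makes it
        # appear in model.parameters() and receive gradients.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # Any extra (differentiable) logic can go here before or after the conv.
        return F.conv2d(x, self.weight, self.bias)

# Usage: gradients flow into the custom filter automatically.
layer = MyConv2d(3, 8, 3)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
out = layer(torch.randn(1, 3, 32, 32))
out.mean().backward()  # autograd fills layer.weight.grad
opt.step()             # the optimizer updates the custom filter
```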

I hope this helps.


Hi Skand,
Do you know how we can go about creating a custom layer that does not need a backward method? The gradients should just flow through it when backpropagating. Any help appreciated!

Hi @nabsabs,
You can simply freeze the layer's weights if you don't want them to change. Backpropagation then won't update any of that layer's weights.
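For example, a quick sketch (the nn.Sequential model here is just illustrative): setting requires_grad=False on a layer's parameters freezes them, while gradients still flow through the layer to the ones before it.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 10),  # pretend this is the layer to freeze
    nn.ReLU(),
    nn.Linear(10, 2),
)

for param in model[0].parameters():
    param.requires_grad = False  # no gradients computed for these weights

# Pass only the trainable parameters to the optimizer:
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```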


Hi @pvskand, thanks for the quick reply! Can you elaborate, maybe with an example, on how to freeze the layers? If I don't write a backward method, does that count as not changing the value of the layer's weights?

You can have a look at this link for freezing weights.
No, not writing a backward method does not necessarily mean that the weights won't change. If the forward method of your layer uses differentiable operations, then autograd takes care of the backward pass by default.
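Here's a small sketch of that point (the layer name ScaleAndShift is just illustrative): the layer has no backward method, yet its parameter still gets a gradient unless you explicitly freeze it.

```python
import torch
import torch.nn as nn

class ScaleAndShift(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, x):
        return self.scale * x + 1.0  # only differentiable tensor ops

layer = ScaleAndShift()
x = torch.randn(4, requires_grad=True)
layer(x).sum().backward()
print(layer.scale.grad)  # gradient computed automatically, no backward method written
print(x.grad)            # gradients also flow back through the layer to the input

# To actually stop the weight from changing, freeze it explicitly:
layer.scale.requires_grad_(False)
```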
