Optimizer related

Hi, I’m sorry if this is not the right place to ask this question.

Mainly for a fully connected NN: can I write my own weight update function (I don't want to use gradients, which I believe are what the optimizer uses when backpropagating) based on the inputs/outputs and weights of each perceptron at the current iteration? I am going to use the same update function for every perceptron. Can I also update the layers in any order I want with PyTorch?

Thank you.

You can write your own update function, for sure!

To update your weights, you could use the optimizer library (torch.optim). But you can also do it yourself. For example, you can implement gradient descent, SGD, or Adam yourself with code like the following.

net = NN()  # your network (assumed to be defined elsewhere)
learning_rate = 0.01
for param in net.parameters():
    # compute your custom update; it must have the same shape as `param`
    weight_update = smth_with_good_dimensions
    # `.data` bypasses autograd, so the update is applied in place
    # without being tracked in the computation graph
    param.data.sub_(weight_update * learning_rate)
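
As a concrete example, here is what plain SGD would look like in that style. This is only a sketch: the small Sequential network, the random data, and the MSE loss are stand-ins so the loop actually runs.

import torch
import torch.nn as nn

# stand-in network and data, just so the update loop below is runnable
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
inputs, targets = torch.randn(16, 4), torch.randn(16, 2)
learning_rate = 0.01

loss = nn.functional.mse_loss(net(inputs), targets)
loss.backward()  # fills param.grad for every parameter

with torch.no_grad():
    for param in net.parameters():
        # plain SGD: the weight update is just the gradient
        param.sub_(param.grad * learning_rate)
        param.grad.zero_()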

As these examples show, you have access to all of your parameters through net.parameters(), so you can update them however you want, and in whatever order you want.
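
Since you want to use each layer's inputs/outputs rather than gradients, one way to get those values is a forward hook, and you can then walk the layers in any order you like. The sketch below uses a Hebbian-style placeholder rule (my_update_rule) purely for illustration; substitute your own rule.

import torch
import torch.nn as nn

# a small example network (any fully connected net works the same way)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# capture each Linear layer's input and output during the forward pass
captured = {}
def make_hook(name):
    def hook(module, inputs, output):
        captured[name] = (inputs[0].detach(), output.detach())
    return hook

for name, module in net.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

def my_update_rule(layer_in, layer_out):
    # placeholder: a Hebbian-style outer product averaged over the batch
    return 0.01 * layer_out.t() @ layer_in / layer_in.shape[0]

x = torch.randn(16, 4)
net(x)  # fills `captured` with each layer's input and output

# update the layers in whatever order you like -- here, last layer first
with torch.no_grad():
    modules = dict(net.named_modules())
    for name in reversed(list(captured)):
        layer_in, layer_out = captured[name]
        modules[name].weight.sub_(my_update_rule(layer_in, layer_out))

Note that the hooks only record activations and the updates happen under torch.no_grad(), so nothing here depends on autograd and you never need to call backward().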
