For a project I have started building in PyTorch, I need to implement my own descent algorithm (a custom optimizer different from RMSProp, Adam, etc.). In TensorFlow this seems to be possible (https://towardsdatascience.com/custom-optimizer-in-tensorflow-d5b41f75644a) and I would like to know whether it is also the case in PyTorch.
I tried simply adding my descent vector to the leaf variable, but PyTorch didn't agree: "a leaf Variable that requires grad has been used in an in-place operation." When I don't do the operation in-place, the "new" variable is no longer a leaf, so that doesn't work either…
Is there an easy way to create such a custom optimizer in PyTorch?
You can certainly write your own update function!
To update your weights, you would normally use an optimizer from torch.optim, but you can also do it yourself. For example, you can implement plain gradient descent, SGD, or Adam by hand along the lines of the following code.
import torch

net = NN()  # your own torch.nn.Module
learning_rate = 0.01

with torch.no_grad():  # avoids the "leaf Variable ... in-place operation" error
    for param in net.parameters():
        weight_update = smth_with_good_dimensions  # your descent direction, e.g. param.grad
        param.sub_(learning_rate * weight_update)
As you can see, net.parameters() gives you access to your parameters, so you can update them however you want. Note the torch.no_grad() context: it tells autograd not to track the update, which is exactly what resolves the in-place error you ran into.
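If you want your update rule to plug into existing training loops (zero_grad(), step(), etc.), you can also subclass torch.optim.Optimizer. Here is a minimal sketch implementing plain gradient descent; the class name PlainSGD and the lr value are just illustrative choices, not anything from the PyTorch library itself.

```python
import torch
from torch.optim import Optimizer


class PlainSGD(Optimizer):
    """Minimal custom optimizer: plain gradient descent (illustrative sketch)."""

    def __init__(self, params, lr=0.01):
        defaults = dict(lr=lr)
        super().__init__(params, defaults)

    @torch.no_grad()  # updates must not be tracked by autograd
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                # update rule: p <- p - lr * grad
                p.add_(p.grad, alpha=-group["lr"])
        return loss


# usage: same pattern as any built-in optimizer
model = torch.nn.Linear(2, 1)
opt = PlainSGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 2)).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Your custom descent vector would replace p.grad in the update rule inside step().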