Custom Gradients

Hello Everyone,
My neural network is optimized with gradients that are computed by a separate algorithm. To pass these gradients to the network, I first clear the existing gradients in one of two ways:

(1)
for p in net.parameters():
    p.grad = None

(2)
opt.zero_grad()

I assume that both methods do the same thing and have the same effect. Is my understanding right?
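For context, here is a minimal sketch of what the two approaches do (the tiny Linear model and the SGD optimizer are placeholders added for illustration):

import torch

net = torch.nn.Linear(4, 2)   # placeholder model
opt = torch.optim.SGD(net.parameters(), lr=0.1)

# (1) set every .grad to None by hand
for p in net.parameters():
    p.grad = None

# (2) let the optimizer clear the gradients; depending on the PyTorch version
#     this either zeroes the existing .grad tensors in place or sets them to
#     None (newer releases expose this via zero_grad(set_to_none=...))
opt.zero_grad()

# In this workflow the two are effectively equivalent, because the custom
# gradients are written directly into .grad right afterwards.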
After clearing the gradients, I pass the computed gradients to the network (the gradients are stored in a list, where each element is a NumPy array with the same shape as the corresponding network parameter):

for p in net.parameters():
    p.grad = Variable(torch.from_numpy(GRADIENT_ARRAY))
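Since the gradients come as a list of NumPy arrays, something along these lines may be closer to what the loop needs. This is only a sketch (the model, the contents of grad_list, and the dtype/device handling are my assumptions), and in recent PyTorch the Variable wrapper is not required, assigning a plain tensor to .grad is enough:

import numpy as np
import torch

net = torch.nn.Linear(4, 2)   # placeholder model
opt = torch.optim.SGD(net.parameters(), lr=0.1)

# placeholder for the externally computed gradients: one NumPy array per
# parameter, in the same order and with the same shapes as net.parameters()
grad_list = [np.ones(p.shape, dtype=np.float32) for p in net.parameters()]

for p, g in zip(net.parameters(), grad_list):
    # convert each array and match the parameter's dtype/device so that
    # opt.step() can consume the gradient without a type or device mismatch
    p.grad = torch.from_numpy(g).to(dtype=p.dtype, device=p.device)

opt.step()   # update the parameters using the assigned gradients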

After passing the gradients, I use optimizer.step() to update the parameters. Does calling optimizer.step() do the same thing as updating the parameters manually?

Yes, optimizer.step() applies the optimizer's update rule to the parameters; x.data -= x.grad * learning_rate is a simple example of such a rule (plain SGD).
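To make that concrete, here is a small check that optimizer.step() matches the manual rule for plain SGD (the parameter values and learning rate are made up; optimizers with momentum, weight decay, or adaptive learning rates apply a more involved rule):

import torch

lr = 0.1
x = torch.nn.Parameter(torch.tensor([1.0, 2.0]))
x.grad = torch.tensor([0.5, -0.5])            # pretend this came from the custom algorithm

manual = x.detach().clone() - lr * x.grad     # manual update: x - lr * grad

opt = torch.optim.SGD([x], lr=lr)
opt.step()                                    # optimizer update with the same rule

print(torch.allclose(x.detach(), manual))     # True: both give the same result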
