Backpropagating with gradient function instead of loss function

Hi,

I have a model that I need to train with a self-calculated gradient instead of a loss function, but how do I backpropagate this gradient? I have a model and an optimizer:

model = Model()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

Now I can calculate the output and the gradient:

optimizer.zero_grad()
out = model(input)
gradient = my_custom_gradient_function(out)

Normally, when I have a loss function, I would now call loss.backward() and optimizer.step(), but how can I backpropagate with the calculated gradient instead?


Is using the following correct?

out.backward(gradient)
optimizer.step()

And if yes, why, and what is it doing exactly?

Hi,

Yes, that looks good.
Another option is to write a custom autograd Function, as described here, and then wrap your loss computation with it.
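To answer the "what is it doing" part: out.backward(gradient) treats the tensor you pass in as d(loss)/d(out) and backpropagates it through the graph (a vector-Jacobian product), accumulating the parameter gradients in .grad so optimizer.step() can use them. Here is a minimal sketch of that equivalence, using a toy linear map in place of your model (x, w and the shapes are just for illustration):

import torch

x = torch.randn(4, 3)
w = torch.randn(3, 2, requires_grad=True)

out = x @ w                              # toy forward pass
gradient = torch.randn_like(out)         # stands in for my_custom_gradient_function(out)

out.backward(gradient)                   # seeds d(loss)/d(out) with `gradient`
grad_a = w.grad.clone()

w.grad = None                            # reset before the second pass
out = x @ w
(out * gradient).sum().backward()        # equivalent scalar formulation
grad_b = w.grad.clone()

print(torch.allclose(grad_a, grad_b))    # True

Note this equivalence only holds if gradient is treated as a constant, i.e. it is detached from the graph.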
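And a rough sketch of the custom Function option, assuming my_custom_gradient_function(out) returns a tensor shaped like out and can be re-evaluated inside backward (the class name and the dummy-scalar trick are illustrative, not from the docs):

import torch

class CustomGradLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, out):
        ctx.save_for_backward(out)
        # Return a dummy scalar so the usual loss.backward() pattern still applies.
        return out.sum() * 0.0

    @staticmethod
    def backward(ctx, grad_output):
        out, = ctx.saved_tensors
        # Inject the self-calculated gradient as d(loss)/d(out); grad_output is 1.0 here.
        return my_custom_gradient_function(out) * grad_output

optimizer.zero_grad()
out = model(input)
loss = CustomGradLoss.apply(out)
loss.backward()
optimizer.step()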
