Backpropagating with gradient function instead of loss function


I have a model and I need to train it with a self-calculated gradient instead of a loss function, but how do I backpropagate this gradient? I have a model and an optimizer:

import torch.optim as optim

model = Model()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

Now I can calculate the output and the gradient:

out = model(input)
gradient = my_custom_gradient_function(out)

Normally I would now call loss.backward() and optimizer.step() when I have a loss function, but how can I backpropagate with the calculated gradient instead?

Is using the following correct?

out.backward(gradient)
optimizer.step()

And if yes, why, and what is it doing exactly?


Yes, that looks good. out.backward(gradient) starts backpropagation from out, using your tensor as the incoming gradient: autograd computes the vector-Jacobian product, so every parameter ends up with the same .grad it would get from a scalar loss whose derivative with respect to out was exactly gradient. optimizer.step() then updates the parameters from those .grad fields as usual. Note that gradient must have the same shape as out.
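To make that concrete, here is a minimal self-contained sketch (the toy model and the hand-written gradient tensor are stand-ins for your model and my_custom_gradient_function):

```python
import torch

# Toy "model": out = w * x, elementwise
w = torch.tensor([2.0, 3.0], requires_grad=True)
x = torch.tensor([1.0, 4.0])
out = w * x

# A self-calculated gradient of some (unspecified) loss w.r.t. out,
# standing in for my_custom_gradient_function(out)
gradient = torch.tensor([0.5, -1.0])

# Seed backpropagation with the custom gradient: autograd forms the
# vector-Jacobian product, so w.grad = gradient * d(out)/d(w) = gradient * x
out.backward(gradient)
print(w.grad)  # tensor([ 0.5000, -4.0000])
```

After this call you would run optimizer.step() exactly as in the usual loss-based workflow.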
Another option is to write a custom autograd Function as described here, and then wrap your gradient computation with it.
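A sketch of that second option, assuming the custom-gradient idea from the question (the class name, the placeholder forward value, and the example gradient rule are all hypothetical):

```python
import torch

class CustomGradLoss(torch.autograd.Function):
    """Hypothetical wrapper: backward returns a hand-computed gradient."""

    @staticmethod
    def forward(ctx, out):
        ctx.save_for_backward(out)
        # The forward value only matters if you want to log a "loss";
        # backpropagation is fully determined by backward() below.
        return out.sum()

    @staticmethod
    def backward(ctx, grad_output):
        (out,) = ctx.saved_tensors
        # Stand-in for my_custom_gradient_function(out)
        custom_grad = 2.0 * out
        # grad_output is the gradient flowing in from above (1.0 here)
        return grad_output * custom_grad

# Toy usage: out = 3 * w
w = torch.tensor([1.0, 2.0], requires_grad=True)
out = w * 3.0
loss = CustomGradLoss.apply(out)
loss.backward()
print(w.grad)  # chain rule: (2 * out) * d(out)/d(w) = (2 * 3 * w) * 3
```

This keeps the familiar loss.backward() / optimizer.step() call pattern while substituting your own gradient in the backward pass.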