Propagate manually computed gradients

I want to backpropagate a manually computed Jacobian J, which consists of gradients.
Let's say my pipeline is composed of a neural network A whose output is fed into B, a differentiable model without any learnable parameters. Finally, the loss is computed from the output of B:

input -> A -> B -> get output and compute loss L

A: composed of neural networks
B: a differentiable model with no learnable parameters
J: Jacobian consisting of the gradients of the loss L w.r.t. the input to B (which is equivalent to the output of A)

Here, I compute J manually. Then I want to propagate the gradients back through A, from its very last layer to its input layer.
In this case, is `output_from_A.backward(J)` what I have to do?
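
Here is a minimal sketch of what I have in mind; A, B, and J below are just placeholders for my actual models and Jacobian:

```python
import torch
import torch.nn as nn

# A: a small network with learnable parameters (stand-in for my actual model)
A = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

x = torch.randn(2, 4)          # input
output_from_A = A(x)           # output of A, shape (2, 3)

# B: differentiable, no learnable parameters (placeholder)
def B(y):
    return (y ** 2).sum(dim=1)

# J: gradients of the loss L w.r.t. output_from_A, computed manually outside
# autograd; here just a random tensor with the same shape as output_from_A
J = torch.randn_like(output_from_A)

# Propagate the manual gradients back through A -- is this the right call?
output_from_A.backward(J)

print(A[0].weight.grad.shape)  # gradients have reached A's parameters
```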

Thanks in advance!

I'm not familiar with your use case, but based on your description your suggestion sounds right.
output_from_A.backward() accepts any gradient tensor with the same shape as output_from_A, so as long as this condition is met you can pass your manually computed gradients to this method.
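
As a sanity check, you could compare it against what autograd would produce end-to-end; B and the loss here are just placeholders chosen so the Jacobian is easy to write down analytically:

```python
import torch
import torch.nn as nn

A = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 3))
x = torch.randn(2, 4)

# Reference: let autograd handle the full chain input -> A -> B -> L
out_ref = A(x)
L = out_ref.sin().sum()            # stands in for B + loss; differentiable, no parameters
L.backward()
ref_grads = [p.grad.clone() for p in A.parameters()]

# Manual: compute J = dL/d(output_from_A) yourself, then pass it to backward()
A.zero_grad()
out = A(x)
J = out.detach().cos()             # analytic gradient of sum(sin(out)) w.r.t. out
out.backward(J)

for g_ref, p in zip(ref_grads, A.parameters()):
    print(torch.allclose(g_ref, p.grad))  # True for every parameter of A
```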
