I want to backpropagate a manually computed Jacobian J, which consists of gradients.
Let's say my pipeline is composed of a neural network A, whose output is sent to B, a differentiable model with no learnable parameters. Finally, the loss is computed from the output of B:
input -> A -> B -> get output and compute loss L
A: a neural network with learnable parameters
B: a differentiable model with no learnable parameters
J: the gradient of the loss L w.r.t. the input to B (which is the same tensor as the output from A)
Here, I compute J manually. Then I want to propagate these gradients from the last layer of A back to its input layer.
In this case, is `output_from_A.backward(J)` the right call?
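To make the question concrete, here is a minimal sketch of what I mean. The network A, the choice of B (an elementwise square, so its gradient is easy to write by hand), and the loss (a plain sum) are all hypothetical placeholders:

```python
import torch
import torch.nn as nn

# A: a small network with learnable parameters (placeholder example)
A = nn.Linear(4, 3)

x = torch.randn(2, 4)
output_from_A = A(x)

# B runs outside autograd, so I detach its input and compute J by hand.
with torch.no_grad():
    b_in = output_from_A.detach()
    b_out = b_in ** 2          # B: elementwise square, no parameters
    # L = b_out.sum()  =>  dL/d(b_in) = 2 * b_in
    J = 2 * b_in               # manually computed gradient w.r.t. B's input

# Propagate the manually computed gradient through A.
# J must have the same shape as output_from_A.
output_from_A.backward(J)

print(A.weight.grad.shape)
```

Calling `backward(J)` seeds the backward pass with J instead of the implicit all-ones gradient, so A's parameter `.grad` fields get filled in as if the loss had been differentiated end to end.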
Thanks in advance!