Can PyTorch support manually assigning grad_output like Torch?
In Torch, each nn.Module has a backward(self, grad_output) method, so we can provide our own manually designed grad_output.
To be more specific, let z = g(f(x)); then by the chain rule: dz/dx = dg(f(x))/df(x) * df(x)/dx.
In PyTorch autograd, we just need to call z.backward() to do the back-propagation. If I want to manually provide k = dg(f(x))/df(x) and compute dz/dx = k * df(x)/dx, how should I do it?
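Concretely, here is a sketch of what I mean (f and g are just placeholders for my own functions):

```python
import torch

x = torch.randn(3, requires_grad=True)

def f(x):
    return x ** 2          # placeholder for my inner function

def g(y):
    return y.sin().sum()   # placeholder for my outer function

z = g(f(x))
z.backward()               # autograd computes dz/dx for me,
                           # but I want to substitute my own k = dg(f(x))/df(x)
```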
Thanks in advance!
And I'm not sure if register_backward_hook will help.
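For completeness, this is roughly how that hook would be attached (a sketch; as far as I can tell it only lets me inspect or replace the gradients flowing through an nn.Module):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)

def hook(module, grad_input, grad_output):
    # grad_output holds the gradients w.r.t. the module's outputs;
    # returning a tuple of tensors here would replace grad_input
    print(grad_output)

handle = layer.register_backward_hook(hook)

out = layer(torch.randn(2, 4))
out.sum().backward()   # the hook fires during this backward pass
handle.remove()
```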
You can also call z.backward(grad_output) on the last variable instead of z.backward().
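Applied to the question's setup: since k is exactly the gradient arriving at f(x), you can call backward on that intermediate variable with k as the gradient argument (a minimal sketch, reusing the placeholder f from above):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x ** 2                    # y = f(x), placeholder as above

k = torch.ones_like(y)        # manually designed k = dg(f(x))/df(x)
y.backward(k)                 # back-propagates k through f

print(x.grad)                 # equals k * df(x)/dx, i.e. k * 2x here
```

This works because backward(gradient) computes a vector-Jacobian product, so the tensor you pass plays the role of grad_output at the point where you call it.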