I want to reverse the gradient sign from the output of model net1 but calculate net0(inp0) as usual.
In the simple case, I would do something like:
out0 = net0(inp0)
out1 = net0(net1(inp1))
loss = criterion(out0, out1, target)
loss.backward()
# manually flip the sign of the gradients accumulated in net1
for p in net1.parameters():
    p.grad.neg_()
opt.step()
Can I use some kind of hook to reverse the gradients coming from the output of net1?
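For reference, such a hook might look like the following sketch (only an illustration of the idea, assuming the same net0/net1/criterion/opt setup as above and using Tensor.register_hook):

out0 = net0(inp0)
mid = net1(inp1)
mid.register_hook(lambda grad: -grad)  # negate the gradient flowing back into net1
out1 = net0(mid)
loss = criterion(out0, out1, target)
loss.backward()  # net0 gets its usual gradients, net1 gets negated ones
opt.step()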
Oh sorry, I tend to read only partially.
The approach you are following is the simplest one IMO. In fact, when you call backward() an iterative process like that already happens over the parameters, so I don’t think the extra loop would hurt performance.
As an alternative to using a hook, you could write a custom Function whose forward() simply passes through the tensor(s) unchanged, but whose backward() flips the sign of the gradient(s). You would then insert it at the desired place in your network, e.g.:
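A minimal sketch of such a Function (assuming the static-method torch.autograd.Function API; the names GradReverse and grad_reverse are placeholders chosen for illustration):

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # identity in the forward pass
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # flip the sign of the incoming gradient
        return grad_output.neg()

def grad_reverse(x):
    return GradReverse.apply(x)

# inserted between net1 and net0: net1's parameters receive negated gradients,
# while net0 is trained as usual
out0 = net0(inp0)
out1 = net0(grad_reverse(net1(inp1)))
loss = criterion(out0, out1, target)
loss.backward()
opt.step()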
Much cleaner with a Function, since I can’t find any tutorials or examples of hook usage.
So I just need to apply the function to the output tensor, and all of the preceding computations will then receive reversed gradients, right?
Also, while googling for the Function approach, I found this example