Backpropagate a fixed gradient through a network

Thanks for the help. I understand you must be busy. I just wanted to know what the error was.
So,
out1 = net1(input)
err1 = net2(out1)
out2 = net3(out1)
newd = out2.data - out1.data
torch.autograd.backward([err1, out1], [grad_out, -someWeight * newd], retain_graph=True)
Here, out2 and out1 have the same shape, and grad_out is a tensor matching err1's shape (since it is used as err1's gradient). Let me know if you need more info. Thanks!
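In case it helps, here is a minimal runnable version of the above. The layer sizes, the someWeight value, and grad_out are placeholders I picked just so the snippet runs, and .detach() stands in for .data as the modern equivalent:

import torch
import torch.nn as nn

# Placeholder networks and sizes, chosen only to make the snippet runnable
net1 = nn.Linear(4, 8)
net2 = nn.Linear(8, 8)
net3 = nn.Linear(8, 8)

input = torch.randn(2, 4)
someWeight = 0.1  # placeholder value

out1 = net1(input)
err1 = net2(out1)
out2 = net3(out1)

# detach() keeps newd out of the autograd graph, so it acts as a
# constant gradient rather than creating a second differentiable path
newd = out2.detach() - out1.detach()
grad_out = torch.ones_like(err1)  # placeholder gradient seed for err1

# Backpropagates grad_out through err1's graph and -someWeight * newd
# through out1's graph in a single call; gradients accumulate in net1
torch.autograd.backward([err1, out1],
                        [grad_out, -someWeight * newd],
                        retain_graph=True)

print(net1.weight.grad.shape)  # net1 receives gradient from both paths

The second list supplies the gradient seed for each tensor in the first list, so -someWeight * newd is treated exactly as if it were dL/d(out1).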