Hi there. Let’s say we obtain a set of gradients for a set of weights as follows:

```
# Compute per-parameter gradients; create_graph=True keeps the result
# differentiable so a later backward() can flow through it.
grad = torch.autograd.grad([output], list(self.parameters()),
                           grad_outputs=[grad_input],
                           create_graph=True, retain_graph=retain_graph,
                           only_inputs=True)
for i, m in enumerate(self.parameters()):
    m.data = m.data - lr * grad[i]
```

Let the model whose weights were just updated be `net`. We then perform a forward pass with the updated model as `net(input)`. Since I retained the graph in the `autograd.grad` call above, when I call `loss.backward()` I'd like the gradient to flow back through the `grad_input` variable mentioned above (i.e., backpropagated through the `grad[i]`'s). Any idea how to achieve this?
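Concretely, here's a toy standalone version of the flow I mean (made-up scalars, not my real model), where `create_graph=True` lets a later backward pass go through the computed gradient:

```python
import torch

# Toy stand-ins: w plays the role of a netB parameter, a plays the role
# of grad_input coming out of netA.
w = torch.tensor([1.0], requires_grad=True)
a = torch.tensor([2.0], requires_grad=True)

output = w * 3.0
# create_graph=True keeps the gradient computation itself differentiable.
g, = torch.autograd.grad([output], [w], grad_outputs=[a], create_graph=True)

lr = 0.1
# Build the updated weight as a new graph node (no .data mutation),
# so w_updated still depends on a through g.
w_updated = w - lr * g

out = (w_updated * 5.0).sum()
out.backward()

print(a.grad)  # non-None: the gradient flowed through the update into a
```

In this toy case `a.grad` comes out non-None, which is exactly the behavior I'm after for `netA`.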

Here’s a simple flow of the program:

```
gradOut = self.netA(input_data)
self.netB.paramUpdate(gradOut) # performs the code snippet provided above
out = self.netB(input_data)
loss = self.criterion(out, label)
self.optimizer.zero_grad()
loss.backward() # I hope that the gradients can flow from netB to netA
```
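For reference, here's a self-contained toy sketch of that whole flow, with the update built functionally rather than by assigning to `.data` (this assumes `torch.func.functional_call`, available in PyTorch >= 2.0, and uses made-up module/tensor names):

```python
import torch
import torch.nn as nn
from torch.func import functional_call

# Hypothetical stand-ins for netB and netA's output.
netB = nn.Linear(4, 1, bias=False)
input_data = torch.randn(2, 4)
grad_out = torch.randn(2, 1, requires_grad=True)  # plays the role of gradOut

params = dict(netB.named_parameters())
names, tensors = zip(*params.items())

output = netB(input_data)
# Differentiable gradient computation, as in the snippet above.
grads = torch.autograd.grad([output], list(tensors),
                            grad_outputs=[grad_out], create_graph=True)

lr = 0.01
# New parameter tensors that stay connected to grad_out in the graph.
new_params = {n: t - lr * g for n, t, g in zip(names, tensors, grads)}

# Forward pass through netB with the updated parameters.
out = functional_call(netB, new_params, (input_data,))
loss = out.sum()
loss.backward()

print(grad_out.grad is not None)  # True: gradient reached the netA side
```

The key difference from my snippet is that the updated weights are fresh graph nodes rather than in-place `.data` assignments, so `loss.backward()` can reach `grad_out`.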