# How can I manually update network parameters while keeping track of the computational graph?

I am trying to implement MAML. Simplifying, these are the three operations I wish to implement, where $\theta$ denotes the weights of a neural network:
$$
\begin{aligned}
\theta_1 &\leftarrow \theta - \alpha \frac{\partial f(\theta)}{\partial \theta} \\
\theta_2 &\leftarrow \theta_1 - \alpha \frac{\partial f(\theta_1)}{\partial \theta_1} \\
\theta &\leftarrow \theta - \alpha \frac{\partial f(\theta_2)}{\partial \theta}
\end{aligned}
$$
Assume a constant input, i.e. `f(theta) = loss(net(in_))`, because we are only interested in the gradients w.r.t. the weights.
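For reference, the three updates can be sketched on a single weight tensor using `torch.autograd.grad` with `create_graph=True` (the stand-in quadratic `f` and the value of `alpha` are my own assumptions, just for illustration):

```python
import torch

alpha = 0.1
theta = torch.randn(3, requires_grad=True)

def f(t):
    # Stand-in for loss(net(in_)): any scalar function of the weights.
    return (t ** 2).sum()

# theta_1 <- theta - alpha * d f(theta) / d theta
grad_0, = torch.autograd.grad(f(theta), theta, create_graph=True)
theta_1 = theta - alpha * grad_0

# theta_2 <- theta_1 - alpha * d f(theta_1) / d theta_1
grad_1, = torch.autograd.grad(f(theta_1), theta_1, create_graph=True)
theta_2 = theta_1 - alpha * grad_1

# theta <- theta - alpha * d f(theta_2) / d theta
# Note the gradient is taken w.r.t. the ORIGINAL theta, which only works
# because theta_1 and theta_2 are non-leaf tensors still attached to it.
meta_grad, = torch.autograd.grad(f(theta_2), theta)
with torch.no_grad():
    theta -= alpha * meta_grad
```

The key point is that `theta_1` and `theta_2` must stay connected to `theta` in the graph, which is exactly what breaks in the snippet below.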

Here’s my code snippet:

```python
temp_net = copy.deepcopy(net)

## First formula
loss_1 = loss(net(in_))
grads = torch.autograd.grad(loss_1, net.parameters(), create_graph=True)

with torch.no_grad():
    for param, grad in zip(temp_net.parameters(), grads):
        new_param = param - lr * grad
        param.copy_(new_param)

## Second formula
loss_2 = loss(temp_net(in_))
torch.autograd.grad(loss_2, net.parameters())  # throws an error
```

The last line throws an error because the computational graph is disconnected when I use `torch.no_grad()`: I can’t compute gradients w.r.t. `net.parameters()`. However, if I remove the `torch.no_grad()`, it throws `RuntimeError: a leaf Variable that requires grad is being used in an in-place operation`.
Looking at the learn2learn implementation of MAML, it turns out we can manually update the parameters in place (not using `.copy_()`) so as to maintain the computational graph. However, this requires recursively walking the `._parameters` property of each of our model’s `_modules`. This is the manual in-place update and this is the recursive part.
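A minimal, self-contained sketch of that idea (not learn2learn’s actual code; the `inner_update` name and the toy network are mine): instead of `param.copy_()`, assign a freshly computed, non-leaf tensor into each module’s `_parameters` dict, recursing through `_modules`. The replaced parameters stay attached to the original leaves, so a later loss can be differentiated w.r.t. the original `theta`.

```python
import torch
import torch.nn as nn

def inner_update(module, lr, grad_iter):
    # Replace each parameter with its updated, NON-leaf version.  Assigning a
    # new tensor into _parameters (instead of param.copy_()) keeps the graph
    # connected back to the original leaf parameters.
    for name in list(module._parameters.keys()):
        p = module._parameters[name]
        if p is not None:
            module._parameters[name] = p - lr * next(grad_iter)
    # Recurse into children; this visiting order mirrors net.parameters().
    for child in module._modules.values():
        if child is not None:
            inner_update(child, lr, grad_iter)

net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
x = torch.randn(16, 4)

theta = list(net.parameters())        # keep handles to the original leaves
loss_1 = net(x).pow(2).mean()
grads = torch.autograd.grad(loss_1, theta, create_graph=True)
inner_update(net, 0.01, iter(grads))  # net now holds theta_1 (non-leaf)

loss_2 = net(x).pow(2).mean()
meta_grads = torch.autograd.grad(loss_2, theta)  # grads w.r.t. original theta
```

Note that after the update `net.parameters()` yields non-leaf tensors, so you must capture the original leaves (here `theta`) before calling `inner_update` if you want to take the meta-gradient against them.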