Hi all,

I’d like to compute the second derivative of the loss w.r.t. the parameters of a model (an nn.Module), but I cannot do this:

```
params = torch.cat([p.flatten() for p in policy.parameters()], dim=0)
for i in range(200):
    y = get_expected_return(policy)
    # Fails: `params` is a new tensor created by torch.cat, so it was
    # never used in the graph that produced y.
    grad = torch.autograd.grad(y, params, create_graph=True)[0]
    with torch.no_grad():
        params += grad
```

So instead I have to do this:

```
for i in range(200):
    y = get_expected_return(policy)
    grad = torch.autograd.grad(y, policy.parameters(), create_graph=True)
    with torch.no_grad():
        for p, g in zip(policy.parameters(), grad):
            p += g
```

But the first version, with a single flat parameter vector, is what I need for the second derivative. Does anybody know how to make the first snippet work?
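For reference, here is a minimal sketch of one direction I've been considering: make the flat vector the actual leaf of the graph and run the module through `torch.func.functional_call` (PyTorch ≥ 2.0) with parameters rebuilt from that vector. The tiny `nn.Linear` policy and the `loss_fn` below are hypothetical stand-ins for my real network and `get_expected_return`.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

# Hypothetical stand-in for the real policy network.
policy = nn.Linear(3, 1)

# Record names/shapes so a state dict can be rebuilt from a flat vector.
names = [n for n, _ in policy.named_parameters()]
shapes = [p.shape for p in policy.parameters()]
numels = [p.numel() for p in policy.parameters()]

# One flat, differentiable parameter vector -- a true leaf of the graph.
flat = torch.cat([p.detach().flatten() for p in policy.parameters()])
flat.requires_grad_(True)

def unflatten(vec):
    # Split the flat vector back into per-parameter tensors.
    chunks = torch.split(vec, numels)
    return {n: c.view(s) for n, c, s in zip(names, chunks, shapes)}

def loss_fn(vec):
    # Hypothetical stand-in for get_expected_return: run the module with
    # parameters taken from `vec` instead of its own stored parameters.
    x = torch.ones(4, 3)
    out = functional_call(policy, unflatten(vec), (x,))
    return out.pow(2).sum()

y = loss_fn(flat)
# First derivative w.r.t. the flat vector, kept in the graph.
grad = torch.autograd.grad(y, flat, create_graph=True)[0]
# Second derivative, e.g. a Hessian-vector product via a second backward.
hvp = torch.autograd.grad(grad @ grad.detach(), flat)[0]
```

Because `flat` is the tensor the forward pass actually consumes, both `autograd.grad` calls differentiate w.r.t. it directly, which is what the second derivative needs.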