Hi,

My question is whether it is possible to load parameters into a model such that they remain part of the computation graph of the output of that model's forward function.

Let's take the following example:

```
import torch
import torch.nn as nn
from torch.autograd import grad

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 1)

    def forward(self, x):
        return self.fc1(x)

model = SimpleNet()

# New parameter values that depend on the original parameters.
new_theta = {n: p * 2 for n, p in model.named_parameters()}
model.load_state_dict(new_theta)

x = torch.randn(4)
result = model(x)

# Returns None for every tensor in new_theta.
grad(result, [p for p in new_theta.values()], allow_unused=True)
```

The grad call at the end returns None for all tensors. As far as I can tell, this is because load_state_dict copies the new values into the model's existing parameters without gradient tracking, so the tensors in new_theta are detached from the computation graph that produces result.
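
A quick check (continuing the snippet above) seems to confirm this: the module still holds its original leaf parameters, and only their values were overwritten.

```
# The module's parameters are still leaf tensors with no history,
# and they are not the tensors stored in new_theta.
print(model.fc1.weight.grad_fn)                     # None (leaf tensor)
print(model.fc1.weight is new_theta['fc1.weight'])  # False
```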

Is it somehow possible to load parameters into the model so that this gradient gets populated?
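
One thing I came across is torch.func.functional_call (available in recent PyTorch versions), which appears to run a module's forward pass with an externally supplied parameter dict instead of copying the values into the module. A minimal sketch of what I have in mind, reusing SimpleNet from above:

```
import torch
from torch.autograd import grad
from torch.func import functional_call  # PyTorch >= 2.0

model = SimpleNet()

# new_theta stays connected to the original parameters via the * 2 op.
new_theta = {n: p * 2 for n, p in model.named_parameters()}

x = torch.randn(4)

# Forward pass with new_theta substituted for the module's own
# parameters; nothing is copied into the module itself.
result = functional_call(model, new_theta, (x,))

# The gradients are now populated instead of None.
print(grad(result, list(new_theta.values())))
```

Is this the intended tool for this use case, or is there a way to achieve the same thing with load_state_dict itself?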