Hi, I’m running the following code for an optimization problem (the loss function here is just a simplified example). The loss depends on the values of the `weights` tensor, which is passed through a sigmoid and then divided by its sum to make sure that (1) each entry lies between 0 and 1, and (2) the vector sums to 1.

```
import torch

def test_weights(epochs):
    weights = torch.rand(64, requires_grad=True)
    optimizer = torch.optim.SGD([weights], lr=1e-2, momentum=0.9)
    for e in range(epochs):
        optimizer.zero_grad()
        weights = torch.sigmoid(weights.clone())      # squash each entry into (0, 1)
        weights = (weights / weights.sum()).clone()   # normalize so the vector sums to 1
        error = weights[1] + weights[2]               # simplified stand-in for the real loss
        error.backward()
        optimizer.step()
        print(weights[0:5])
    return weights
```
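As a quick sanity check (a standalone snippet of my own, separate from the training code), the sigmoid-plus-normalization transform itself does satisfy both constraints on a plain tensor:

```
import torch

x = torch.rand(64)
p = torch.sigmoid(x)          # every entry in (0, 1)
p = p / p.sum()               # rescale so the entries sum to 1
print(p.min().item() > 0, p.max().item() < 1)             # True True
print(torch.isclose(p.sum(), torch.tensor(1.0)).item())   # True
```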

However, after the first epoch's values printed, the next iteration raised this error:

Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

Is there any way to solve this issue?
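My guess is that reassigning `weights` inside the loop replaces the leaf tensor the optimizer holds, so the second iteration's `backward()` runs back through the previous iteration's graph, whose saved tensors have already been freed. Would something like the sketch below be the right fix? (`probs` is just a name I made up for the transformed copy, so that `weights` stays the leaf tensor the whole time.)

```
import torch

def test_weights(epochs):
    weights = torch.rand(64, requires_grad=True)   # stays a leaf tensor for the whole loop
    optimizer = torch.optim.SGD([weights], lr=1e-2, momentum=0.9)
    for e in range(epochs):
        optimizer.zero_grad()
        probs = torch.sigmoid(weights)    # transformed copy under its own name
        probs = probs / probs.sum()
        error = probs[1] + probs[2]
        error.backward()                  # fresh graph each iteration, so no retain_graph needed
        optimizer.step()
        print(probs[0:5])
    return weights
```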