# Torch.sigmoid function gradient issue after first epoch (Trying to backward through the graph a second time)

Hi, I’m running the following code for an optimization problem. (The loss function here is just a simplified example.) The loss depends on the values of the `weights` tensor, which is passed through a sigmoid layer and a division to make sure that (1) each entry is between 0 and 1, and (2) the vector sums to 1.

```python
def test_weights(epochs):
    optimizer = torch.optim.SGD([weights], lr=1e-2, momentum=0.9)

    for e in range(epochs):

        weights = torch.sigmoid(weights.clone())   # map each entry into (0, 1)
        weights = (weights / weights.sum()).clone() # normalise so the vector sums to 1

        error = weights[1] + weights[2]
        error.backward()
        optimizer.step()

        print(weights[0:5])
    return weights
```

However, after the print for the first epoch, I got this error:

> Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

Is there any way to solve this issue?

Try replacing your `error.backward()` call with `error.backward(retain_graph=True)`.
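For reference, here is a minimal, self-contained sketch of what that flag changes (the tensor below is an illustrative stand-in, not the original `weights`): with `retain_graph=True`, the saved intermediates are kept, so a second backward pass through the same graph succeeds.

```python
import torch

# Stand-in leaf tensor (illustrative values, not the original data).
weights = torch.randn(64, requires_grad=True)

w = torch.sigmoid(weights)  # map entries into (0, 1)
w = w / w.sum()             # normalise so the vector sums to 1
error = w[1] + w[2]

error.backward(retain_graph=True)  # saved intermediates are kept alive
error.backward()                   # a second backward through the same graph now works
print(weights.grad[:5])            # gradients from both calls accumulate here
```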


Just a question: why is your `weights` tensor of size 64 when the error only uses two of its entries? (Granted, they are normalised, so all entries will receive a gradient.) I’m not 100% sure, but indexing with `weights[1]` might be in-place, so check that your tensor does indeed have a `grad_fn` (a quick check is sketched below). Otherwise, your gradient will be zero!
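A quick way to run that check (illustrative names, not the original tensors): a leaf tensor has no `grad_fn`, while anything produced from it by a differentiable op should have one.

```python
import torch

weights = torch.rand(64, requires_grad=True)  # leaf tensor, stand-in for the original
normalised = torch.sigmoid(weights)           # entries in (0, 1)
normalised = normalised / normalised.sum()    # sums to 1
error = normalised[1] + normalised[2]

print(weights.grad_fn)     # None: leaf tensors have no grad_fn
print(normalised.grad_fn)  # <DivBackward0 ...>: still part of the graph
print(error.grad_fn)       # <AddBackward0 ...>: indexing did not detach it
```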

Also, you might want to reduce your learning rate and momentum constant; you’re likely converging straight into a local minimum (or perhaps even diverging) and getting stuck. Try `lr=1e-4`, with or without `momentum=0.9`.
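As a sketch, that change only touches the optimizer construction (the tensor here is again a stand-in for the one in the question):

```python
import torch

weights = torch.rand(64, requires_grad=True)  # stand-in for the original weights tensor

optimizer = torch.optim.SGD([weights], lr=1e-4)                  # without momentum
# optimizer = torch.optim.SGD([weights], lr=1e-4, momentum=0.9)  # or with momentum
```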