PyTorch: updating a hyper-parameter using the current loss raises "RuntimeError: Trying to backward through the graph a second time"

I am defining my own loss function, which has a hyper-parameter Lambda. For example, if the prediction is y, I define the loss as Loss = Lambda * y. At certain iterations I want to update Lambda using the current iteration's loss, e.g. Lambda = Lambda + Loss, but doing so raises the following error:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

Specifically, a minimal version of my code is as follows:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

batch_size, in_dim, out_dim = 10, 3, 3
model = nn.Linear(in_dim, out_dim)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
# hyper-parameter of the loss; it should not be trained by the optimizer
lam = torch.from_numpy(np.array([0.1, 0.1, 0.1]))
lam.requires_grad = False

for i in range(10):
    x = torch.rand(batch_size, in_dim)
    output = model(x)
    loss = torch.sum(lam*output)
    if i == 5:
        # update the hyper-parameter with the current loss
        lam = lam + torch.clone(loss)
    optimizer.zero_grad()
    loss.backward()  # raises the RuntimeError above at i == 6
    optimizer.step()
    print(loss)

I had the feeling that the error was caused by using the loss to update Lambda, so I wrapped it in torch.clone(loss), hoping not to affect the loss itself, but that didn't help. Does anyone know how to fix this? An explanation of why this error occurs would also be great!

Since you are manually updating lam, try wrapping the update in a with torch.no_grad() block so that the operation is not tracked in the computation graph. The error occurs because torch.clone preserves the autograd history: after lam = lam + torch.clone(loss), lam is connected to iteration 5's graph, so the backward() call in iteration 6 tries to backpropagate through that graph a second time, after its intermediate buffers have already been freed.
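
For reference, a minimal sketch of what that looks like, keeping the setup from the question (the loss.detach() alternative in the comment is an equivalent option I'm adding, not something stated above):

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

batch_size, in_dim, out_dim = 10, 3, 3
model = nn.Linear(in_dim, out_dim)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
lam = torch.from_numpy(np.array([0.1, 0.1, 0.1]))  # requires_grad is False by default

for i in range(10):
    x = torch.rand(batch_size, in_dim)
    output = model(x)
    loss = torch.sum(lam * output)
    if i == 5:
        # inside no_grad the addition is not recorded, so lam stays a plain
        # tensor with no grad_fn and the next iteration's graph does not
        # reach back into this (already freed) one
        with torch.no_grad():
            lam = lam + loss
        # equivalent alternative: lam = lam + loss.detach()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(loss)

Both variants break the link between lam and the previous iteration's graph, which is all that matters here; pick whichever reads more clearly to you.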