Hi!
I have run into the common error
"one of the variables needed for gradient computation has been modified by an inplace operation",
but in an unusual case. The code below reproduces it; the error is raised by output_2.backward().
So, what's wrong?
import torch
from torch import nn, optim

model_1 = nn.Linear(1, 1, bias=False)
model_2 = nn.Linear(1, 1, bias=False)
opt_1 = optim.Adam(model_1.parameters())
opt_2 = optim.Adam(model_2.parameters())

input = torch.tensor([1.])

output_1 = model_1(input)
opt_1.zero_grad()
output_1.backward(retain_graph=True)
opt_1.step()  # updates model_1.weight in place

# output_1 still carries its autograd graph, so this backward
# also flows back through model_1
output_2 = model_2(output_1)
opt_2.zero_grad()
output_2.backward()  # <- RuntimeError raised here
opt_2.step()
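
For reference, the following variant runs without the error. My guess (not a confirmed explanation) is that opt_1.step() modifies model_1.weight in place while it is still saved for the backward through model_1, so detaching output_1 before feeding it to model_2, which stops the second backward from ever reaching model_1, sidesteps the problem:

output_1 = model_1(input)
opt_1.zero_grad()
output_1.backward()  # retain_graph no longer needed
opt_1.step()

# detach() cuts the graph, so output_2.backward() stops at output_1
# and never touches model_1's (now modified) saved tensors
output_2 = model_2(output_1.detach())
opt_2.zero_grad()
output_2.backward()  # no RuntimeError
opt_2.step()

But I would still like to understand why the original ordering fails.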