Hi,
I have done that but forgot to attach the error. Here it is:
  File "venv\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "src/del.py", line 76, in forward
    z = self.fc2(y.clone())
  File "venv\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "venv\lib\site-packages\torch\nn\modules\linear.py", line 91, in forward
    return F.linear(input, self.weight, self.bias)
  File "venv\lib\site-packages\torch\nn\functional.py", line 1674, in linear
    ret = torch.addmm(bias, input, weight.t())
So that means that an input to your Linear layer was modified in place.
Since the stack trace shows that you already clone the activation, it can't be that one, so I would guess it is the weights, which are most likely modified by your optimizer when you do optimizer.step().
What I mean is that optimizer.step() is an in-place operation on the parameters. So if you try to backward again after doing the optimizer step, without re-doing the forward, you will see this error.
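To see that step() really mutates the parameters in place (same storage, new values), here is a minimal sketch; the Linear layer and the SGD settings are just placeholders:

```python
import torch
import torch.nn as nn

lin = nn.Linear(2, 2)
opt = torch.optim.SGD(lin.parameters(), lr=0.1)

w_ptr = lin.weight.data_ptr()            # address of the weight's storage
w_before = lin.weight.detach().clone()   # snapshot of the current values

lin(torch.ones(1, 2)).sum().backward()   # produces a nonzero grad on the weight
opt.step()                               # SGD update: weight -= lr * grad

# Same storage, different values: the update happened in place.
assert lin.weight.data_ptr() == w_ptr
assert not torch.equal(lin.weight, w_before)
```

This in-place update is exactly what invalidates the tensors that autograd saved during the forward.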
Is your code doing something like:
out = model(inp)
first_loss = bar(out, label)
first_loss.backward(retain_graph=True)
opt.step()
opt.zero_grad()
second_loss = baz(out, label)
second_loss.backward() # This will fail if model contains a Linear,
# because the Linear's weights were modified inplace above and
# this backward cannot be computed anymore.
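The usual fix is to run both backwards (or sum the losses) before calling opt.step(), so the graph still matches the weights it was recorded with. A minimal sketch, with a placeholder model and placeholder losses standing in for your bar/baz:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(4, 2)                       # placeholder for your model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

inp = torch.randn(8, 4)
label = torch.randn(8, 2)

out = model(inp)
first_loss = F.mse_loss(out, label)           # stand-in for bar(out, label)
second_loss = out.abs().mean()                # stand-in for baz(out, label)

# Backward through BOTH losses before the in-place optimizer step,
# so the weights saved for backward still match the recorded graph.
first_loss.backward(retain_graph=True)
second_loss.backward()

opt.step()       # the in-place parameter update happens only now
opt.zero_grad()
```

Equivalently, you can do (first_loss + second_loss).backward() in a single call, which also avoids retain_graph=True.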