I was trying to call .backward() on a loss like the following.
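For context, model is just a small fully connected network. Here is a placeholder sketch of what I mean (the Model class, its constructor arguments, and the Tanh nonlinearity below are only illustrative, not my exact code; the nonlinearity is there only so the second derivative is not trivially zero):

import torch
import torch.nn as nn

class Model(nn.Module):
    # placeholder MLP: n_in inputs, one hidden layer of n_hidden neurons, n_out outputs
    def __init__(self, n_in=2, n_hidden=2, n_out=1):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.act = nn.Tanh()
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.out(self.act(self.hidden(x)))

The code that produces the error is: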
model = Model(2, 2, 1)  # a small MLP: 2 inputs, 2 neurons in one hidden layer, 1 output
x = torch.Tensor([[0.3,0.52],[-0.1,0.2]])
# this part tries to calculate the second derivative with respect to g
g = x.clone()
g.requires_grad = True
UF = model.forward(g)
# first derivative of UF with respect to g
tmp = torch.autograd.grad(outputs = UF, inputs = g, grad_outputs = torch.ones(UF.size()), retain_graph=True, create_graph=True)[0]
# second derivative with respect to g; keep only the column for the first input
UFxx = torch.autograd.grad(outputs = tmp, inputs = g, grad_outputs = torch.ones(tmp.size()))[0][:,[0]]
U = model.forward(x)
F = U * UFxx
lossF = (F**2).mean()
lossF.backward()
However, the following error comes up:
Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
If I modify it as follows,
UFxx = torch.autograd.grad(outputs = tmp, inputs = g, grad_outputs = torch.ones(tmp.size()), create_graph=True)[0][:,[0]]
then the code runs fine.
My question is: why do I need to set create_graph=True? I already have the U part, which will give me the graph, and UFxx should be just a number:
U = model.forward(x)
F = U * UFxx
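To make my assumption explicit: I think of UFxx at that point as a plain constant carrying no graph, i.e. as if I had written something like this (this is just my mental model, not code I have verified):

F = U * UFxx.detach()   # treat the second derivative as a fixed number with no graph attached
lossF = (F ** 2).mean()
lossF.backward()        # I expect this to need only the graph built by U = model.forward(x)

So I don't understand which graph is being backpropagated through a second time.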