Backward for composition loss

I was trying to call .backward() on a loss like the following.

import torch

model = Model(2, 2, 1, 2)  # suppose Model is just a small fully connected network: 2 inputs, 1 hidden layer with 2 neurons, 1 output
x = torch.Tensor([[0.3, 0.52], [-0.1, 0.2]])

# this part calculates the second derivative with respect to g
g = x.clone()
g.requires_grad = True
UF = model.forward(g)
tmp = torch.autograd.grad(outputs=UF, inputs=g, grad_outputs=torch.ones(UF.size()), retain_graph=True, create_graph=True)[0]
UFxx = torch.autograd.grad(outputs=tmp, inputs=g, grad_outputs=torch.ones(tmp.size()))[0][:, [0]]

# compute the composition loss from U and UFxx
U = model.forward(x)
F = U * UFxx
lossF = (F ** 2).mean()
lossF.backward()

However, the following error comes up:

Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.

If I modify it as follows,

UFxx = torch.autograd.grad(outputs=tmp, inputs=g, grad_outputs=torch.ones(tmp.size()), create_graph=True)[0][:, [0]]

then the code runs fine.

My question is: why do I need to set create_graph=True? I already have the U part, which will give me the graph, and UFxx should be just a number:

U = model.forward(x)
F = U * UFxx

Hi,

There are two things:

  • create_graph=True means that you want the graph to be created while doing the backward so that the gradients have requires_grad=True.
  • retain_graph=True means that you don’t want the graph to be deleted because you want to call backward on it again.

Note that create_graph=True forces retain_graph=True; the small sketch below illustrates both flags.
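
Here is a minimal sketch of the difference on a toy function y = x**3 (just an illustration, separate from your snippet):

import torch

x = torch.tensor(2.0, requires_grad=True)

# retain_graph=True keeps the graph of y alive, so grad can be called on it again.
y = x ** 3
g1 = torch.autograd.grad(y, x, retain_graph=True)[0]  # dy/dx = 3*x**2 = 12
g2 = torch.autograd.grad(y, x)[0]                     # only allowed because the graph was retained above

# create_graph=True builds a graph for the backward pass itself, so the gradient
# has requires_grad=True and can be differentiated again (the graph is retained too).
y = x ** 3
dy_dx = torch.autograd.grad(y, x, create_graph=True)[0]  # dy_dx.requires_grad is True
d2y_dx2 = torch.autograd.grad(dy_dx, x)[0]               # d2y/dx2 = 6*x = 12
print(g1.item(), g2.item(), dy_dx.requires_grad, d2y_dx2.item())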

So in this case: if you plan to do multiple backward passes, you should use retain_graph=True, but if you plan to backpropagate through the result of the first backward, you should use create_graph=True.
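
For completeness, here is a runnable version of the snippet above with that fix applied. Since the original model class isn't shown, a small 2-2-1 network with a Tanh activation is assumed here (a purely linear model would have an identically zero second derivative):

import torch
import torch.nn as nn

# hypothetical stand-in for the 2-2-1 model from the question; the Tanh is an
# assumption so that the second derivative is not identically zero
model = nn.Sequential(nn.Linear(2, 2), nn.Tanh(), nn.Linear(2, 1))

x = torch.Tensor([[0.3, 0.52], [-0.1, 0.2]])

# second derivative of the model output with respect to g
g = x.clone()
g.requires_grad = True
UF = model(g)
tmp = torch.autograd.grad(outputs=UF, inputs=g, grad_outputs=torch.ones(UF.size()), retain_graph=True, create_graph=True)[0]
# create_graph=True here keeps UFxx attached to the graph, so lossF.backward() can run
UFxx = torch.autograd.grad(outputs=tmp, inputs=g, grad_outputs=torch.ones(tmp.size()), create_graph=True)[0][:, [0]]

# composition loss
U = model(x)
F = U * UFxx
lossF = (F ** 2).mean()
lossF.backward()
print(lossF.item())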