RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. WHY?

I am getting the following error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
I can work around it by using error.backward(retain_graph=True).

But why does this happen? I did not call backward() twice inside the loop.
I should mention that in each iteration I am using the same variables: value_path and value_body.

	for i_emb in range(emb_times):
		optimizer_s.zero_grad()
		out_data = net(value_path,value_body)
		out_constant = (out_data.div(out_data.norm(p=2, dim=1, keepdim=True))).detach()
		error = ((out_data - out_constant).pow(2)).sum()
		error.backward(retain_graph=True)
		optimizer_s.step()

I do not understand why that happens.

Hi,

Where do “value_path” and “value_body” come from? Do they require gradients? If they do, whatever created them is part of the graph that will be common to all the iterations of the loop, hence the error.
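
Here is a minimal sketch of that situation (the names below are made up for illustration, they are not taken from your code): if the inputs are produced by a grad-requiring computation before the loop, that part of the graph is shared by every iteration, and the second backward() fails.

import torch
import torch.nn as nn

# Illustrative reproduction: `feat` is built by a grad-requiring computation
# once, before the loop, so its piece of the graph is shared by all iterations.
encoder = nn.Linear(8, 8)
net = nn.Linear(8, 4)
optimizer_s = torch.optim.SGD(net.parameters(), lr=0.1)

raw = torch.randn(2, 8)
feat = encoder(raw)              # graph node created once, outside the loop

for i in range(2):
    optimizer_s.zero_grad()
    out_data = net(feat)
    error = out_data.pow(2).sum()
    error.backward()             # iteration 0 frees the shared part of the
                                 # graph; iteration 1 raises this exact error
    optimizer_s.step()

# Detaching or recomputing the input inside the loop removes the shared graph,
# so backward() then works without retain_graph=True:
#     out_data = net(feat.detach())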

Hi AlbanD

Both of them (value_path, value_body) come straight from the raw data; there is no graph computation behind them.
Anyway, I just modified their initial definitions:

value_body = torch.zeros((batch, 1, size_body, size_emb_body), device='cuda', requires_grad=False)
value_path = torch.zeros((batch, 1, size_path, size_emb_path), device='cuda', requires_grad=False)

And I still need to call error.backward(retain_graph=True); otherwise the error shows up. :frowning:
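
For completeness, one hypothetical way the error can still appear with requires_grad=False inputs (this is only an assumption about what net might be doing, not code from this thread) is when the module keeps a tensor with history between forward calls, for example a cached state:

import torch
import torch.nn as nn

# Hypothetical module (an assumption, not the actual `net` from this thread):
# it caches a tensor computed in the first forward and reuses it afterwards,
# so that cached piece of the graph is shared by every iteration.
class CachingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        self.state = None

    def forward(self, x):
        if self.state is None:
            self.state = self.fc(x)       # graph created only once
        return self.fc(x) + self.state    # later calls reuse that graph

net = CachingNet()
optimizer_s = torch.optim.SGD(net.parameters(), lr=0.1)
x = torch.zeros(2, 4, requires_grad=False)

for i in range(2):
    optimizer_s.zero_grad()
    error = net(x).pow(2).sum()
    error.backward()                      # fails on the second iteration even
                                          # though x has requires_grad=False
    optimizer_s.step()

# Detaching the cached tensor (self.state = self.state.detach()) or
# recomputing it on every call removes the shared graph.

Printing value_path.grad_fn and value_body.grad_fn (both should be None) can confirm that the inputs themselves really carry no history, so the shared graph would have to come from somewhere inside net.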