I specified "retain_graph=True" but still get the error "Trying to backward through the graph a second time, but the buffers have already been freed"

I am working with gradient-based adversarial attack code: I set data.requires_grad = True and compute the gradients of the loss with respect to the input (data), i.e. the input gradients that maximize the loss function, not the model's parameter gradients.

and I get:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.


data_grad = torch.autograd.grad(loss, data, retain_graph=True, create_graph=True)[0]
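For context, here is a minimal sketch of the kind of setup involved (the model, shapes, and target below are illustrative stand-ins, not the actual CustomModel):

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the ensemble; not the asker's actual CustomModel.
model = nn.Linear(10, 2)
data = torch.randn(1, 10)
target = torch.tensor([1])

# Track gradients on the INPUT, not just the parameters.
data.requires_grad = True
output = model(data)
loss = nn.functional.cross_entropy(output, target)

# Gradient of the loss w.r.t. the input data.
data_grad = torch.autograd.grad(loss, data, retain_graph=True, create_graph=True)[0]
print(data_grad.shape)  # same shape as the input
```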

When I searched for a solution to that runtime error, the answer was to specify retain_graph=True,
but I have already specified retain_graph=True in that code.

I suspect the problem is that I am using a custom model.

CustomModel consists of Model A, Model B, and a last classifier that combines Model A and Model B (working as an ensemble).
When data is forwarded, I transform the data differently for Model A and Model B; that is, I apply different functions to the data for Model A and Model B in CustomModel's forward.
Those functions are not self attributes (not module parameters).

Is it impossible to get the gradient (even though it is not the model's gradient)
when I use a custom model whose forward applies functions that are not parameters?
If there is a way, can you tell me how to solve this error?

You need to specify retain_graph=True the first time you call autograd.grad or .backward, not in the call that raises the error.
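A minimal illustration of this (toy tensors, unrelated to the attack code): if the same graph is backed through twice, it is the first call that must retain the graph:

```python
import torch

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(3, 1, requires_grad=True)
loss = (x @ w).sum()

# FIRST call: retain_graph=True here keeps the graph's buffers alive.
g1 = torch.autograd.grad(loss, x, retain_graph=True)[0]

# SECOND call: succeeds only because the first call retained the graph.
# Without retain_graph=True above, THIS line would raise
# "Trying to backward through the graph a second time ...".
g2 = torch.autograd.grad(loss, x)[0]
assert torch.equal(g1, g2)
```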

Also, in your training loop you should avoid doing differentiable computations outside of the loop, as that creates a part of the graph that is re-used from one iteration to the next.
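For example (a toy sketch with a single tensor, just to show the pattern): a differentiable computation built once outside the loop makes every iteration share one graph, and the second backward fails; rebuilding it inside the loop gives each iteration a fresh graph:

```python
import torch

w = torch.randn(3, requires_grad=True)

# Broken: `base` is built ONCE, so every iteration's loss shares its graph,
# and the first backward frees that shared graph's buffers.
base = w * w
failed = False
for _ in range(2):
    loss = base.sum()
    try:
        loss.backward()
    except RuntimeError:
        failed = True  # raised on the second iteration
print("second backward failed:", failed)

# Fixed: rebuild the differentiable part inside the loop.
for _ in range(2):
    base = w * w
    loss = base.sum()
    loss.backward()  # fresh graph each iteration, no error
```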