One of the variables needed for gradient computation has been modified by an inplace operation (backward pass is performed twice)

Hi Srushti!

Your immediate problem is likely here. You are calling loss.backward()
with retain_graph = True. First, you should think carefully about whether
you need retain_graph = True, and if so, why.

In the last iteration of your enumerate (train_loader) loop, you build a
computation graph that connects output to the parameters of model4, and
because of retain_graph = True, that graph is preserved rather than freed.
optimizer.step() then modifies the parameters of your model inplace. When
closure() is subsequently executed, it calls backward() on
loss_fun (output, target), which backpropagates through the preserved graph
that connects output to the parameters of model4. But those parameters have
been modified inplace since the graph was built, causing the error.
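
Here is a minimal sketch of that failure mode (hypothetical stand-in code,
not your actual model):

import torch
import torch.nn as nn

model = nn.Sequential (nn.Linear (4, 4), nn.Linear (4, 1))   # second Linear plays the role of fc2
optimizer = torch.optim.SGD (model.parameters(), lr = 0.1)
loss_fun = nn.MSELoss()

data = torch.randn (8, 4)
target = torch.randn (8, 1)

output = model (data)                   # graph connecting output to the parameters
loss = loss_fun (output, target)
loss.backward (retain_graph = True)     # graph is kept alive instead of being freed

optimizer.step()                        # modifies the parameters inplace

def closure():
    optimizer.zero_grad()
    loss = loss_fun (output, target)    # reuses the stale, pre-step graph
    loss.backward()                     # RuntimeError: ... modified by an inplace operation
    return loss

closure()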

The forward-call traceback generated by set_detect_anomaly (True) is
complaining about the call to fc2 (x) in your model4. This agrees with the
above analysis, in that fc2.weight has been modified inplace by the call
to optimizer.step().
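
If you want to confirm this directly, you can look at the tensor's (private)
._version counter, which is what autograd's saved-tensor check compares
against the version recorded when the graph was built. A sketch (again with
a stand-in layer, not your actual fc2):

import torch
import torch.nn as nn

fc2 = nn.Linear (4, 1)                       # stand-in for model4's fc2
optimizer = torch.optim.SGD (fc2.parameters(), lr = 0.1)

out = fc2 (torch.randn (8, 4, requires_grad = True))
out.sum().backward (retain_graph = True)     # graph saves fc2.weight at its current version

print (fc2.weight._version)                  # version at the time the graph saved it
optimizer.step()                             # inplace update of fc2.weight
print (fc2.weight._version)                  # larger now -- the saved tensor is stale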

To fix this, you will need to think through the logic of your use case. Does
optimizer_qnn.step (closure) need gradients of output with respect
to the “regular” parameters of model4, such as those of fc2? If so, would it be
practical to rebuild the graph that connects output to model4's parameters after
calling optimizer.step(), perhaps inside of closure()?
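
For example (just a sketch, assuming that closure() can see data and target
from the enclosing loop, and that redoing the forward pass there fits your
use case), something along these lines rebuilds the graph from the current
parameter values and lets you drop retain_graph = True:

def closure():
    optimizer_qnn.zero_grad()
    output = model4 (data)              # redo the forward pass, so the graph is
                                        # built from the current (post-step) parameters
    loss = loss_fun (output, target)
    loss.backward()                     # no retain_graph = True needed
    return loss

optimizer_qnn.step (closure)

With this pattern closure() rebuilds everything it needs, so the earlier
backward pass no longer has to preserve its graph.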

For some examples that show how to debug and fix inplace-modification errors,
see this post:

Good luck!

K. Frank