Error - "one of the variables needed for gradient computation has been modified by an inplace operation" (not resolved using previous such posts)

I am trying to use the code from vaegan-pytorch/VAEGAN_Implementation_128.ipynb at master · escuccim/vaegan-pytorch · GitHub to train a VAEGAN.
The code is about two years old and raises the error “one of the variables needed for gradient computation has been modified by an inplace operation”. The error does not occur on PyTorch 1.4, but I would like to know how to deal with it on the latest versions.

I have looked through many related topics on GitHub and the PyTorch discussion forums, and the main suggestions I found are:

  1. Detaching/cloning variables - this does not lead to an efficient solution here, because terms such as the mse_loss (which is computed from the discriminator’s output) are added to the encoder’s loss, so they cannot simply be detached; using detach would require running parts of the model several times per forward pass.
  2. Calculating all the losses first and only then calling backward() (see the sketch after the code below).
  3. Using the inputs argument of backward() so that gradients are accumulated only into the required model component (encoder, decoder or discriminator).
    This is what I tried, replacing the corresponding part of the source code with:
loss_encoder.backward(inputs=list(net.encoder.parameters()))
optimizer_encoder.step()
net.zero_grad()

if train_dec:
    loss_decoder.backward(inputs=list(net.decoder.parameters()))
    optimizer_decoder.step()
    net.discriminator.zero_grad()
if train_dis:
    loss_discriminator.backward(inputs=list(net.discriminator.parameters()))
    optimizer_discriminator.step()

However, this still ends up giving the same error. Could someone please advise how this can be dealt with?
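For reference, this is roughly what I understand suggestion 2 to mean, written with the notebook’s variable names (a sketch of the idea, not code from the repository):

# Suggestion 2 (sketch): run every backward() before any optimizer.step(), so that no
# parameter is modified in place while the shared autograd graph is still needed.
net.zero_grad()
loss_encoder.backward(retain_graph=True)
if train_dec:
    loss_decoder.backward(retain_graph=True)
if train_dis:
    loss_discriminator.backward()

optimizer_encoder.step()
if train_dec:
    optimizer_decoder.step()
if train_dis:
    optimizer_discriminator.step()

Note that without the inputs argument, each backward() also accumulates that loss’s gradients into the other components’ parameters, because the three losses share one graph.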

It turns out that it works with a combination of the “inputs” and “retain_graph” arguments.
I also realized that part of the problem was that the changes I had made to the code were not actually taking effect live in the Colab session.
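To make that concrete, here is a small self-contained toy example of the pattern (plain Linear layers standing in for the encoder, decoder and discriminator, with made-up losses, so this is not the notebook’s actual model); it runs without the inplace error on recent PyTorch versions (inputs= needs PyTorch 1.8 or newer):

import torch
import torch.nn as nn

# Toy stand-ins for the encoder/decoder/discriminator, only to demonstrate the pattern.
enc = nn.Linear(8, 4)
dec = nn.Linear(4, 8)
dis = nn.Linear(8, 1)
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_dec = torch.optim.Adam(dec.parameters(), lr=1e-3)
opt_dis = torch.optim.Adam(dis.parameters(), lr=1e-3)

x = torch.randn(2, 8)
z = enc(x)          # "encoding"
x_rec = dec(z)      # "reconstruction"
d_out = dis(x_rec)  # "discriminator" score

# Placeholder losses that share one autograd graph, like the VAEGAN losses do.
loss_enc = z.pow(2).mean() + (x_rec - x).pow(2).mean()
loss_dec = (x_rec - x).pow(2).mean() - d_out.mean()
loss_dis = d_out.mean()

# retain_graph=True keeps the shared graph alive for the later backward() calls;
# inputs=... restricts gradient accumulation (and graph traversal) to one sub-network,
# so the in-place parameter updates from the earlier step() calls are never revisited.
loss_enc.backward(retain_graph=True, inputs=list(enc.parameters()))
opt_enc.step()

loss_dec.backward(retain_graph=True, inputs=list(dec.parameters()))
opt_dec.step()

loss_dis.backward(inputs=list(dis.parameters()))
opt_dis.step()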