RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [5, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection

Hi, I am facing the above-mentioned problem while calling the run_on_batch function (code given below). The full traceback is included at the end. Any help would be much appreciated.

Note: I am using the Adam optimizer for both the generator and the discriminator:

optimizer = optim.Adam(model.parameters(), lr=1e-3)
optimizer_d = optim.Adam(discriminator.parameters(), lr=1e-3)

ret_f, ret, disc = run_on_batch(model, discriminator, data, mask, decay, rdecay, args, optimizer, optimizer_d, epoch)

def run_on_batch(model, discriminator, data, mask, decay, rdecay, args, optimizer, optimizer_d, epoch):
    ret_f, ret = model(data, mask, decay, rdecay, args)
    disc = discriminator(ret['originals'], mask, args)
    print("BATCH LOSS", ret['loss'])
    print("GENERATOR LOSS", disc['loss_g'])
    print("DISCRIMINATOR LOSS", disc['loss_d'])

    if optimizer is not None:
        # Update the discriminator every 10th epoch
        if epoch % 10 == 0:
            optimizer_d.zero_grad()
            disc['loss_d'].backward(retain_graph=True)
            optimizer_d.step()

        # Update the generator on the combined loss
        optimizer.zero_grad()
        (ret['loss'] + disc['loss_g']).backward()
        optimizer.step()

    return ret_f, ret, disc
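As the error message itself suggests, turning on anomaly detection makes the backward-pass error point back at the forward-pass operation whose saved tensor was modified. A minimal sketch on a toy graph (not the model above) of how this looks:

```python
import torch

# Toy graph: sigmoid saves its output tensor for the backward pass,
# so an in-place edit of that output invalidates the saved tensor.
caught = False
with torch.autograd.set_detect_anomaly(True):
    x = torch.ones(5, 1, requires_grad=True)
    y = x.sigmoid()   # output is saved for backward
    y.mul_(2)         # in-place edit bumps y's version counter
    try:
        y.sum().backward()   # autograd finds y at version 1, expected 0
    except RuntimeError as e:
        caught = True
        print("version-counter error:", "inplace operation" in str(e))
print("detected:", caught)
```

With anomaly mode enabled, the stderr warning includes a second traceback locating the forward-pass line (here, `x.sigmoid()`) whose result was later mutated, which is usually the fastest way to find the offending op in a larger model.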
Traceback (most recent call last):
  File "/home/jyotirmaya/Work/deep-learning-based-packet-imputation/BiGAN/biGan/main_ganOrigActivity.py", line 772, in <module>
    run(ARGS)
  File "/home/jyotirmaya/Work/deep-learning-based-packet-imputation/BiGAN/biGan/main_ganOrigActivity.py", line 738, in run
    trainLoss,discLoss,gLoss, valLoss,discValLoss,gValLoss = run_epoch(args, model, discriminator)
  File "/home/jyotirmaya/Work/deep-learning-based-packet-imputation/BiGAN/biGan/main_ganOrigActivity.py", line 669, in run_epoch
    ret_f, ret, disc = run_on_batch(model,discriminator,data,mask,decay,rdecay, args, optimizer,optimizer_d,epoch)#,bmi_norm)
  File "/home/jyotirmaya/Work/deep-learning-based-packet-imputation/BiGAN/biGan/bgan_i_ganOrig.py", line 231, in run_on_batch
    (ret['loss']+disc['loss_g']).backward()
  File "/home/jyotirmaya/.local/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/jyotirmaya/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [5, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Try to replace in-place operations (e.g. a += b) with their out-of-place versions (e.g. a = a + b) and see if that fixes the error. Also check whether you are explicitly using in-place versions of some ops, indicated by the trailing underscore, e.g. a.add_(b).
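To illustrate that advice on a toy tensor (not the asker's code): `exp()` saves its output for the backward pass, so mutating that output in place reproduces exactly this version-counter error, while the out-of-place rewrite does not.

```python
import torch

# In-place version: fails at backward time.
a = torch.ones(3, requires_grad=True)
b = a.exp()                  # exp() saves b for its backward formula
err = None
b += 1                       # in-place: invalidates the saved tensor
try:
    b.sum().backward()
except RuntimeError as e:
    err = str(e)
print("in-place fails:", err is not None)

# Out-of-place rewrite: b2 stays untouched, backward succeeds.
a2 = torch.ones(3, requires_grad=True)
b2 = a2.exp()
c2 = b2 + 1                  # new tensor instead of mutating b2
c2.sum().backward()
print("grad matches exp(1):", torch.allclose(a2.grad, torch.exp(torch.ones(3))))
```

The extra tensor allocation is the only cost of the out-of-place form; autograd handles the rest unchanged.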